Test Report: Hyper-V_Windows 18804

3f87824b0e7c024b0b0e0095d3da0d45809b8090:2024-05-07:34370

Failed tests (14/209)

TestAddons/parallel/Registry (86.48s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 22.4237ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-9pbmp" [16728740-775e-40f8-a349-338c94a4598a] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.008773s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cnbdm" [e62db972-6bdf-4460-a207-134f85233fa2] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0191091s
addons_test.go:340: (dbg) Run:  kubectl --context addons-809100 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-809100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-809100 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.4293495s)
addons_test.go:359: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-809100 ip
addons_test.go:359: (dbg) Done: out/minikube-windows-amd64.exe -p addons-809100 ip: (2.5315568s)
addons_test.go:364: expected stderr to be -empty- but got: *"W0507 18:05:38.753576   12888 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n"* .  args "out/minikube-windows-amd64.exe -p addons-809100 ip"
2024/05/07 18:05:41 [DEBUG] GET http://172.19.135.136:5000
2024/05/07 18:05:43 [ERR] GET http://172.19.135.136:5000 request failed: Get "http://172.19.135.136:5000": dial tcp 172.19.135.136:5000: connectex: No connection could be made because the target machine actively refused it.
2024/05/07 18:05:43 [DEBUG] GET http://172.19.135.136:5000: retrying in 1s (4 left)
addons_test.go:388: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-809100 addons disable registry --alsologtostderr -v=1
addons_test.go:388: (dbg) Done: out/minikube-windows-amd64.exe -p addons-809100 addons disable registry --alsologtostderr -v=1: (13.3712339s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-809100 -n addons-809100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p addons-809100 -n addons-809100: (11.1145282s)
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-809100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p addons-809100 logs -n 25: (8.2078186s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-702700 | minikube5\jenkins | v1.33.0 | 07 May 24 17:58 UTC |                     |
	|         | -p download-only-702700                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube5\jenkins | v1.33.0 | 07 May 24 17:58 UTC | 07 May 24 17:58 UTC |
	| delete  | -p download-only-702700                                                                     | download-only-702700 | minikube5\jenkins | v1.33.0 | 07 May 24 17:58 UTC | 07 May 24 17:58 UTC |
	| start   | -o=json --download-only                                                                     | download-only-081300 | minikube5\jenkins | v1.33.0 | 07 May 24 17:58 UTC |                     |
	|         | -p download-only-081300                                                                     |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0                                                                |                      |                   |         |                     |                     |
	|         | --container-runtime=docker                                                                  |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | --all                                                                                       | minikube             | minikube5\jenkins | v1.33.0 | 07 May 24 17:59 UTC | 07 May 24 17:59 UTC |
	| delete  | -p download-only-081300                                                                     | download-only-081300 | minikube5\jenkins | v1.33.0 | 07 May 24 17:59 UTC | 07 May 24 17:59 UTC |
	| delete  | -p download-only-702700                                                                     | download-only-702700 | minikube5\jenkins | v1.33.0 | 07 May 24 17:59 UTC | 07 May 24 17:59 UTC |
	| delete  | -p download-only-081300                                                                     | download-only-081300 | minikube5\jenkins | v1.33.0 | 07 May 24 17:59 UTC | 07 May 24 17:59 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-347400 | minikube5\jenkins | v1.33.0 | 07 May 24 17:59 UTC |                     |
	|         | binary-mirror-347400                                                                        |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |                   |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |                   |         |                     |                     |
	|         | http://127.0.0.1:49786                                                                      |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                                                             |                      |                   |         |                     |                     |
	| delete  | -p binary-mirror-347400                                                                     | binary-mirror-347400 | minikube5\jenkins | v1.33.0 | 07 May 24 17:59 UTC | 07 May 24 17:59 UTC |
	| addons  | disable dashboard -p                                                                        | addons-809100        | minikube5\jenkins | v1.33.0 | 07 May 24 17:59 UTC |                     |
	|         | addons-809100                                                                               |                      |                   |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-809100        | minikube5\jenkins | v1.33.0 | 07 May 24 17:59 UTC |                     |
	|         | addons-809100                                                                               |                      |                   |         |                     |                     |
	| start   | -p addons-809100 --wait=true                                                                | addons-809100        | minikube5\jenkins | v1.33.0 | 07 May 24 17:59 UTC | 07 May 24 18:05 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |                   |         |                     |                     |
	|         | --addons=registry                                                                           |                      |                   |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |                   |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |                   |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |                   |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |                   |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |                   |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |                   |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |                   |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |                   |         |                     |                     |
	|         | --addons=yakd --driver=hyperv                                                               |                      |                   |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |                   |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |                   |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-809100 addons                                                                        | addons-809100        | minikube5\jenkins | v1.33.0 | 07 May 24 18:05 UTC | 07 May 24 18:05 UTC |
	|         | disable metrics-server                                                                      |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| ssh     | addons-809100 ssh cat                                                                       | addons-809100        | minikube5\jenkins | v1.33.0 | 07 May 24 18:05 UTC | 07 May 24 18:05 UTC |
	|         | /opt/local-path-provisioner/pvc-36cf19a0-0753-4154-9461-803993989c88_default_test-pvc/file1 |                      |                   |         |                     |                     |
	| ip      | addons-809100 ip                                                                            | addons-809100        | minikube5\jenkins | v1.33.0 | 07 May 24 18:05 UTC | 07 May 24 18:05 UTC |
	| addons  | addons-809100 addons disable                                                                | addons-809100        | minikube5\jenkins | v1.33.0 | 07 May 24 18:05 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	| addons  | addons-809100 addons disable                                                                | addons-809100        | minikube5\jenkins | v1.33.0 | 07 May 24 18:05 UTC | 07 May 24 18:06 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | addons-809100 addons disable                                                                | addons-809100        | minikube5\jenkins | v1.33.0 | 07 May 24 18:06 UTC | 07 May 24 18:06 UTC |
	|         | registry --alsologtostderr                                                                  |                      |                   |         |                     |                     |
	|         | -v=1                                                                                        |                      |                   |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-809100        | minikube5\jenkins | v1.33.0 | 07 May 24 18:06 UTC |                     |
	|         | -p addons-809100                                                                            |                      |                   |         |                     |                     |
	| addons  | addons-809100 addons                                                                        | addons-809100        | minikube5\jenkins | v1.33.0 | 07 May 24 18:06 UTC |                     |
	|         | disable csi-hostpath-driver                                                                 |                      |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/07 17:59:14
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0507 17:59:14.532729    1744 out.go:291] Setting OutFile to fd 864 ...
	I0507 17:59:14.533416    1744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 17:59:14.533416    1744 out.go:304] Setting ErrFile to fd 868...
	I0507 17:59:14.533416    1744 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 17:59:14.550803    1744 out.go:298] Setting JSON to false
	I0507 17:59:14.553341    1744 start.go:129] hostinfo: {"hostname":"minikube5","uptime":20672,"bootTime":1715084081,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0507 17:59:14.553341    1744 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 17:59:14.560420    1744 out.go:177] * [addons-809100] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0507 17:59:14.563425    1744 notify.go:220] Checking for updates...
	I0507 17:59:14.565969    1744 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 17:59:14.568944    1744 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 17:59:14.571363    1744 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0507 17:59:14.573732    1744 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 17:59:14.576129    1744 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 17:59:14.579183    1744 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 17:59:19.694756    1744 out.go:177] * Using the hyperv driver based on user configuration
	I0507 17:59:19.701864    1744 start.go:297] selected driver: hyperv
	I0507 17:59:19.701864    1744 start.go:901] validating driver "hyperv" against <nil>
	I0507 17:59:19.701974    1744 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 17:59:19.743841    1744 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 17:59:19.744714    1744 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 17:59:19.744714    1744 cni.go:84] Creating CNI manager for ""
	I0507 17:59:19.744714    1744 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 17:59:19.744714    1744 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0507 17:59:19.744714    1744 start.go:340] cluster config:
	{Name:addons-809100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-809100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP:
SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 17:59:19.745674    1744 iso.go:125] acquiring lock: {Name:mk4977609d05da04fcecf95837b3381fb1950afd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 17:59:19.752021    1744 out.go:177] * Starting "addons-809100" primary control-plane node in "addons-809100" cluster
	I0507 17:59:19.755550    1744 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 17:59:19.755550    1744 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0507 17:59:19.755550    1744 cache.go:56] Caching tarball of preloaded images
	I0507 17:59:19.756040    1744 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0507 17:59:19.756040    1744 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 17:59:19.756750    1744 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\config.json ...
	I0507 17:59:19.757084    1744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\config.json: {Name:mkc2e1f1bbe4937e4d134979311382d3ee0dee7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 17:59:19.757371    1744 start.go:360] acquireMachinesLock for addons-809100: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 17:59:19.757371    1744 start.go:364] duration metric: took 0s to acquireMachinesLock for "addons-809100"
	I0507 17:59:19.758290    1744 start.go:93] Provisioning new machine with config: &{Name:addons-809100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.30.0 ClusterName:addons-809100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptio
ns:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 17:59:19.758290    1744 start.go:125] createHost starting for "" (driver="hyperv")
	I0507 17:59:19.760027    1744 out.go:204] * Creating hyperv VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0507 17:59:19.761018    1744 start.go:159] libmachine.API.Create for "addons-809100" (driver="hyperv")
	I0507 17:59:19.761018    1744 client.go:168] LocalClient.Create starting
	I0507 17:59:19.761018    1744 main.go:141] libmachine: Creating CA: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0507 17:59:19.842665    1744 main.go:141] libmachine: Creating client certificate: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0507 17:59:20.101117    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0507 17:59:22.074315    1744 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0507 17:59:22.074403    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 17:59:22.074506    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0507 17:59:23.643838    1744 main.go:141] libmachine: [stdout =====>] : False
	
	I0507 17:59:23.643957    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 17:59:23.644045    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 17:59:24.991095    1744 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 17:59:24.991095    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 17:59:24.991282    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 17:59:28.561876    1744 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 17:59:28.561876    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 17:59:28.564849    1744 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0507 17:59:28.873150    1744 main.go:141] libmachine: Creating SSH key...
	I0507 17:59:29.308190    1744 main.go:141] libmachine: Creating VM...
	I0507 17:59:29.308190    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 17:59:31.974991    1744 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 17:59:31.974991    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 17:59:31.974991    1744 main.go:141] libmachine: Using switch "Default Switch"
	I0507 17:59:31.974991    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 17:59:33.574875    1744 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 17:59:33.574875    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 17:59:33.575174    1744 main.go:141] libmachine: Creating VHD
	I0507 17:59:33.575174    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0507 17:59:37.187875    1744 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 20B2966F-CB3F-4A7C-8EBF-AD3201C6E9D5
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0507 17:59:37.188032    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 17:59:37.188032    1744 main.go:141] libmachine: Writing magic tar header
	I0507 17:59:37.188149    1744 main.go:141] libmachine: Writing SSH key tar header
	I0507 17:59:37.198106    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0507 17:59:40.209979    1744 main.go:141] libmachine: [stdout =====>] : 
	I0507 17:59:40.210052    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 17:59:40.210124    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\disk.vhd' -SizeBytes 20000MB
	I0507 17:59:42.675912    1744 main.go:141] libmachine: [stdout =====>] : 
	I0507 17:59:42.676172    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 17:59:42.676172    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM addons-809100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100' -SwitchName 'Default Switch' -MemoryStartupBytes 4000MB
	I0507 17:59:46.089572    1744 main.go:141] libmachine: [stdout =====>] : 
Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	addons-809100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0507 17:59:46.089572    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 17:59:46.089572    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName addons-809100 -DynamicMemoryEnabled $false
	I0507 17:59:48.183472    1744 main.go:141] libmachine: [stdout =====>] : 
	I0507 17:59:48.184331    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 17:59:48.184331    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor addons-809100 -Count 2
	I0507 17:59:50.191377    1744 main.go:141] libmachine: [stdout =====>] : 
	I0507 17:59:50.191616    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 17:59:50.191830    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName addons-809100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\boot2docker.iso'
	I0507 17:59:52.544824    1744 main.go:141] libmachine: [stdout =====>] : 
	I0507 17:59:52.545790    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 17:59:52.545865    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName addons-809100 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\disk.vhd'
	I0507 17:59:54.919126    1744 main.go:141] libmachine: [stdout =====>] : 
	I0507 17:59:54.919126    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 17:59:54.919126    1744 main.go:141] libmachine: Starting VM...
	I0507 17:59:54.920185    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM addons-809100
	I0507 17:59:57.844663    1744 main.go:141] libmachine: [stdout =====>] : 
	I0507 17:59:57.844663    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 17:59:57.845151    1744 main.go:141] libmachine: Waiting for host to start...
	I0507 17:59:57.845151    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 17:59:59.900718    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 17:59:59.900718    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 17:59:59.900718    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:00:02.245438    1744 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:00:02.245438    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:03.249474    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:00:05.285819    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:00:05.285819    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:05.286872    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:00:07.655811    1744 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:00:07.655847    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:08.670012    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:00:10.679545    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:00:10.680036    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:10.680164    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:00:12.979368    1744 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:00:12.979535    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:13.981172    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:00:15.972196    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:00:15.972196    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:15.972281    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:00:18.277544    1744 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:00:18.277544    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:19.284214    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:00:21.267301    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:00:21.267356    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:21.267356    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:00:23.681475    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:00:23.681475    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:23.681565    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:00:25.655484    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:00:25.655484    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:25.655484    1744 machine.go:94] provisionDockerMachine start ...
	I0507 18:00:25.656429    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:00:27.612209    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:00:27.612209    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:27.612283    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:00:29.953731    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:00:29.953731    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:29.958128    1744 main.go:141] libmachine: Using SSH client type: native
	I0507 18:00:29.967431    1744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.136 22 <nil> <nil>}
	I0507 18:00:29.967431    1744 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 18:00:30.101087    1744 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0507 18:00:30.101226    1744 buildroot.go:166] provisioning hostname "addons-809100"
	I0507 18:00:30.101314    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:00:32.056627    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:00:32.056627    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:32.057042    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:00:34.416228    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:00:34.416228    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:34.420285    1744 main.go:141] libmachine: Using SSH client type: native
	I0507 18:00:34.420285    1744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.136 22 <nil> <nil>}
	I0507 18:00:34.420804    1744 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-809100 && echo "addons-809100" | sudo tee /etc/hostname
	I0507 18:00:34.587590    1744 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-809100
	
	I0507 18:00:34.587873    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:00:36.575412    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:00:36.575681    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:36.575766    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:00:38.926892    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:00:38.926892    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:38.931704    1744 main.go:141] libmachine: Using SSH client type: native
	I0507 18:00:38.932053    1744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.136 22 <nil> <nil>}
	I0507 18:00:38.932126    1744 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-809100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-809100/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-809100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 18:00:39.075855    1744 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 18:00:39.075967    1744 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0507 18:00:39.075967    1744 buildroot.go:174] setting up certificates
	I0507 18:00:39.075967    1744 provision.go:84] configureAuth start
	I0507 18:00:39.076069    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:00:41.003957    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:00:41.003957    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:41.004592    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:00:43.339426    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:00:43.339426    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:43.340023    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:00:45.271006    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:00:45.271989    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:45.271989    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:00:47.585767    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:00:47.585767    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:47.586181    1744 provision.go:143] copyHostCerts
	I0507 18:00:47.586318    1744 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0507 18:00:47.587625    1744 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0507 18:00:47.588588    1744 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0507 18:00:47.589314    1744 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.addons-809100 san=[127.0.0.1 172.19.135.136 addons-809100 localhost minikube]
	I0507 18:00:47.699365    1744 provision.go:177] copyRemoteCerts
	I0507 18:00:47.708813    1744 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 18:00:47.708901    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:00:49.661863    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:00:49.661863    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:49.662471    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:00:51.978424    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:00:51.978424    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:51.979264    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:00:52.086372    1744 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3772575s)
	I0507 18:00:52.087454    1744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0507 18:00:52.131619    1744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0507 18:00:52.174860    1744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0507 18:00:52.222019    1744 provision.go:87] duration metric: took 13.1451478s to configureAuth
	I0507 18:00:52.222019    1744 buildroot.go:189] setting minikube options for container-runtime
	I0507 18:00:52.222643    1744 config.go:182] Loaded profile config "addons-809100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:00:52.222643    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:00:54.163891    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:00:54.164525    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:54.164525    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:00:56.498499    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:00:56.498499    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:56.501878    1744 main.go:141] libmachine: Using SSH client type: native
	I0507 18:00:56.502454    1744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.136 22 <nil> <nil>}
	I0507 18:00:56.502454    1744 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 18:00:56.628984    1744 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 18:00:56.629060    1744 buildroot.go:70] root file system type: tmpfs
	I0507 18:00:56.629336    1744 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 18:00:56.629407    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:00:58.596262    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:00:58.596262    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:00:58.597312    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:01:00.939880    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:01:00.939880    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:00.946018    1744 main.go:141] libmachine: Using SSH client type: native
	I0507 18:01:00.946624    1744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.136 22 <nil> <nil>}
	I0507 18:01:00.946624    1744 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 18:01:01.105729    1744 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 18:01:01.105729    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:01:03.071994    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:01:03.071994    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:03.072878    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:01:05.414100    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:01:05.414100    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:05.418251    1744 main.go:141] libmachine: Using SSH client type: native
	I0507 18:01:05.418777    1744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.136 22 <nil> <nil>}
	I0507 18:01:05.418861    1744 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 18:01:07.492995    1744 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0507 18:01:07.492995    1744 machine.go:97] duration metric: took 41.8346328s to provisionDockerMachine
	I0507 18:01:07.492995    1744 client.go:171] duration metric: took 1m47.7245923s to LocalClient.Create
	I0507 18:01:07.492995    1744 start.go:167] duration metric: took 1m47.7245923s to libmachine.API.Create "addons-809100"
	I0507 18:01:07.492995    1744 start.go:293] postStartSetup for "addons-809100" (driver="hyperv")
	I0507 18:01:07.492995    1744 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 18:01:07.501942    1744 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 18:01:07.501942    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:01:09.490247    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:01:09.490314    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:09.490314    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:01:11.815223    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:01:11.815223    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:11.815494    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:01:11.922535    1744 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4202884s)
	I0507 18:01:11.933523    1744 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 18:01:11.941130    1744 info.go:137] Remote host: Buildroot 2023.02.9
	I0507 18:01:11.941207    1744 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0507 18:01:11.941590    1744 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0507 18:01:11.941853    1744 start.go:296] duration metric: took 4.4485518s for postStartSetup
	I0507 18:01:11.945289    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:01:13.894262    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:01:13.894262    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:13.894906    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:01:16.151542    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:01:16.151542    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:16.152571    1744 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\config.json ...
	I0507 18:01:16.154927    1744 start.go:128] duration metric: took 1m56.3886549s to createHost
	I0507 18:01:16.155130    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:01:18.012761    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:01:18.012761    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:18.013505    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:01:20.237664    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:01:20.237664    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:20.242641    1744 main.go:141] libmachine: Using SSH client type: native
	I0507 18:01:20.242641    1744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.136 22 <nil> <nil>}
	I0507 18:01:20.242641    1744 main.go:141] libmachine: About to run SSH command:
date +%s.%N
	I0507 18:01:20.380243    1744 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715104880.587271165
	
	I0507 18:01:20.380243    1744 fix.go:216] guest clock: 1715104880.587271165
	I0507 18:01:20.380243    1744 fix.go:229] Guest: 2024-05-07 18:01:20.587271165 +0000 UTC Remote: 2024-05-07 18:01:16.1550218 +0000 UTC m=+121.742823401 (delta=4.432249365s)
	I0507 18:01:20.380395    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:01:22.212295    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:01:22.212295    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:22.213367    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:01:24.391913    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:01:24.391913    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:24.397373    1744 main.go:141] libmachine: Using SSH client type: native
	I0507 18:01:24.397951    1744 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.136 22 <nil> <nil>}
	I0507 18:01:24.397951    1744 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715104880
	I0507 18:01:24.541854    1744 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May  7 18:01:20 UTC 2024
	
	I0507 18:01:24.545102    1744 fix.go:236] clock set: Tue May  7 18:01:20 UTC 2024
	 (err=<nil>)
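The clock-fix sequence above reads the guest clock over SSH with `date +%s.%N`, computes the drift against the host, then re-sets the guest with `sudo date -s @<epoch>`. A minimal sketch of the delta computation, using the two epochs from the log lines above (the `awk` arithmetic is illustrative; minikube does this in Go, not shell):

```shell
# Hedged sketch of the guest-clock sync above. The two epochs mirror the
# log's "Guest" and "Remote" timestamps; awk stands in for minikube's Go code.
guest_epoch=1715104880.587271165   # guest: date +%s.%N over SSH
host_epoch=1715104876.1550218      # host wall clock at the same moment
delta=$(awk -v g="$guest_epoch" -v h="$host_epoch" 'BEGIN { printf "%.3f", g - h }')
echo "delta=${delta}s"
# The log then re-sets the clock with: sudo date -s @1715104880
```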
	I0507 18:01:24.545205    1744 start.go:83] releasing machines lock for "addons-809100", held for 2m4.7784085s
	I0507 18:01:24.545450    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:01:26.474697    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:01:26.474697    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:26.474832    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:01:28.683587    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:01:28.683587    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:28.687224    1744 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 18:01:28.687224    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:01:28.696848    1744 ssh_runner.go:195] Run: cat /version.json
	I0507 18:01:28.696848    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:01:30.624696    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:01:30.624834    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:30.624834    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:01:30.625488    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:01:30.625488    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:30.625488    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:01:32.953303    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:01:32.953376    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:32.953721    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:01:32.969339    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:01:32.970006    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:01:32.970344    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:01:33.043463    1744 ssh_runner.go:235] Completed: cat /version.json: (4.3463155s)
	I0507 18:01:33.051229    1744 ssh_runner.go:195] Run: systemctl --version
	I0507 18:01:33.139321    1744 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.4516656s)
	I0507 18:01:33.148348    1744 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0507 18:01:33.156983    1744 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 18:01:33.165105    1744 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0507 18:01:33.188149    1744 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0507 18:01:33.188149    1744 start.go:494] detecting cgroup driver to use...
	I0507 18:01:33.188149    1744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 18:01:33.228893    1744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0507 18:01:33.257650    1744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 18:01:33.275058    1744 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 18:01:33.285674    1744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 18:01:33.314177    1744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 18:01:33.339344    1744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 18:01:33.364233    1744 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 18:01:33.388360    1744 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 18:01:33.412768    1744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 18:01:33.439630    1744 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 18:01:33.468744    1744 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
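The run of `sed -i` commands above rewrites containerd's config.toml in place (sandbox image, cgroup driver, runtime version, CNI conf dir). One of those rewrites can be reproduced against a scratch file; the file content here is illustrative, not the VM's real containerd config:

```shell
# Reproduce one in-place rewrite from the log on a scratch config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same pattern the log runs when configuring the "cgroupfs" cgroup driver:
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
result=$(grep 'SystemdCgroup' "$cfg")
echo "$result"
```

The `( *)` capture preserves the original indentation, so the rewrite works regardless of how deeply the key is nested.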
	I0507 18:01:33.496077    1744 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 18:01:33.521783    1744 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 18:01:33.547220    1744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:01:33.706674    1744 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0507 18:01:33.732528    1744 start.go:494] detecting cgroup driver to use...
	I0507 18:01:33.741958    1744 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 18:01:33.777812    1744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 18:01:33.811792    1744 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 18:01:33.846416    1744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 18:01:33.878094    1744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 18:01:33.906963    1744 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0507 18:01:33.966338    1744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 18:01:33.986554    1744 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 18:01:34.024133    1744 ssh_runner.go:195] Run: which cri-dockerd
	I0507 18:01:34.038603    1744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 18:01:34.055534    1744 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 18:01:34.092637    1744 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 18:01:34.259980    1744 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 18:01:34.432799    1744 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 18:01:34.433068    1744 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0507 18:01:34.470417    1744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:01:34.644547    1744 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 18:01:37.126551    1744 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4817795s)
	I0507 18:01:37.137845    1744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 18:01:37.171110    1744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 18:01:37.202816    1744 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 18:01:37.383928    1744 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 18:01:37.553762    1744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:01:37.727425    1744 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 18:01:37.762704    1744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 18:01:37.791442    1744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:01:37.954498    1744 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 18:01:38.045058    1744 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 18:01:38.053915    1744 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 18:01:38.062730    1744 start.go:562] Will wait 60s for crictl version
	I0507 18:01:38.070241    1744 ssh_runner.go:195] Run: which crictl
	I0507 18:01:38.084870    1744 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 18:01:38.130515    1744 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0507 18:01:38.140898    1744 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 18:01:38.173910    1744 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 18:01:38.205403    1744 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0507 18:01:38.205635    1744 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0507 18:01:38.209034    1744 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0507 18:01:38.209034    1744 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0507 18:01:38.209034    1744 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0507 18:01:38.209034    1744 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a3:a5:4f Flags:up|broadcast|multicast|running}
	I0507 18:01:38.211670    1744 ip.go:210] interface addr: fe80::1edb:f5fd:c218:d8d2/64
	I0507 18:01:38.211670    1744 ip.go:210] interface addr: 172.19.128.1/20
	I0507 18:01:38.219227    1744 ssh_runner.go:195] Run: grep 172.19.128.1	host.minikube.internal$ /etc/hosts
	I0507 18:01:38.223902    1744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
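The `/etc/hosts` update above follows a "filter out any stale entry, append the fresh one, replace via a temp file" pattern. A scratch-file reproduction (paths and the stale IP are illustrative; the real command edits `/etc/hosts` with `sudo cp`):

```shell
# Reproduce the hosts-file update pattern from the log on a scratch file.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.0.9\thost.minikube.internal\n' > "$hosts"
tmp=$(mktemp)
{ grep -v $'\thost.minikube.internal$' "$hosts"   # drop the stale entry
  printf '172.19.128.1\thost.minikube.internal\n'; } > "$tmp"
cp "$tmp" "$hosts"                                # single-cp replacement
entry=$(grep 'host.minikube.internal' "$hosts")
echo "$entry"
```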
	I0507 18:01:38.244416    1744 kubeadm.go:877] updating cluster {Name:addons-809100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.3
0.0 ClusterName:addons-809100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.135.136 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[]
MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0507 18:01:38.244748    1744 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 18:01:38.253408    1744 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 18:01:38.271549    1744 docker.go:685] Got preloaded images: 
	I0507 18:01:38.271549    1744 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0507 18:01:38.279977    1744 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0507 18:01:38.307078    1744 ssh_runner.go:195] Run: which lz4
	I0507 18:01:38.324906    1744 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0507 18:01:38.331510    1744 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0507 18:01:38.331711    1744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0507 18:01:40.040279    1744 docker.go:649] duration metric: took 1.7270999s to copy over tarball
	I0507 18:01:40.049586    1744 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0507 18:01:45.078130    1744 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (5.0281975s)
	I0507 18:01:45.078130    1744 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0507 18:01:45.134134    1744 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0507 18:01:45.150632    1744 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0507 18:01:45.188958    1744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:01:45.377888    1744 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 18:01:51.059587    1744 ssh_runner.go:235] Completed: sudo systemctl restart docker: (5.6811998s)
	I0507 18:01:51.071591    1744 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 18:01:51.092227    1744 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0507 18:01:51.092227    1744 cache_images.go:84] Images are preloaded, skipping loading
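The preload check above compares the images the runtime reports (`docker images --format {{.Repository}}:{{.Tag}}`) against the ones the cluster needs. A sketch of that membership test, with the lists hard-coded from the log's stdout rather than taken from a live daemon:

```shell
# Sketch of the "Images are preloaded, skipping loading" decision above.
preloaded='registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/pause:3.9'
required='registry.k8s.io/kube-apiserver:v1.30.0'
# -x: whole-line match, -F: literal string (tags contain dots)
if printf '%s\n' "$preloaded" | grep -qxF "$required"; then
  status="preloaded"
else
  status="missing"
fi
echo "$required: $status"
```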
	I0507 18:01:51.092227    1744 kubeadm.go:928] updating node { 172.19.135.136 8443 v1.30.0 docker true true} ...
	I0507 18:01:51.092227    1744 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-809100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.135.136
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:addons-809100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0507 18:01:51.101184    1744 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0507 18:01:51.130651    1744 cni.go:84] Creating CNI manager for ""
	I0507 18:01:51.130721    1744 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 18:01:51.130721    1744 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0507 18:01:51.130721    1744 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.135.136 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-809100 NodeName:addons-809100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.135.136"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.135.136 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0507 18:01:51.130721    1744 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.135.136
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "addons-809100"
	  kubeletExtraArgs:
	    node-ip: 172.19.135.136
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.135.136"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0507 18:01:51.139747    1744 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0507 18:01:51.156571    1744 binaries.go:44] Found k8s binaries, skipping transfer
	I0507 18:01:51.166620    1744 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0507 18:01:51.182877    1744 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0507 18:01:51.209626    1744 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 18:01:51.233648    1744 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0507 18:01:51.269266    1744 ssh_runner.go:195] Run: grep 172.19.135.136	control-plane.minikube.internal$ /etc/hosts
	I0507 18:01:51.273837    1744 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.135.136	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 18:01:51.304374    1744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:01:51.460515    1744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 18:01:51.490243    1744 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100 for IP: 172.19.135.136
	I0507 18:01:51.490368    1744 certs.go:194] generating shared ca certs ...
	I0507 18:01:51.490449    1744 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:01:51.490797    1744 certs.go:240] generating "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0507 18:01:51.634003    1744 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt ...
	I0507 18:01:51.634003    1744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt: {Name:mkecc83abf7dbcd2f2b0fd63bac36f2a7fe554cc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:01:51.635011    1744 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key ...
	I0507 18:01:51.635011    1744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key: {Name:mk56e2872d5c5070a04729e59e76e7398d15f15d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:01:51.637039    1744 certs.go:240] generating "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0507 18:01:51.804880    1744 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt ...
	I0507 18:01:51.804880    1744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt: {Name:mkfcb9723e08b8d76b8a2e73084c13f930548396 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:01:51.806976    1744 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key ...
	I0507 18:01:51.806976    1744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key: {Name:mkd23bfd48ce10457a367dee40c81533c5cc7b5f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:01:51.808337    1744 certs.go:256] generating profile certs ...
	I0507 18:01:51.808930    1744 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.key
	I0507 18:01:51.808930    1744 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt with IP's: []
	I0507 18:01:52.179323    1744 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt ...
	I0507 18:01:52.179323    1744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: {Name:mk30866002ac938a43373911463ad61c821fbf09 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:01:52.180432    1744 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.key ...
	I0507 18:01:52.180432    1744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.key: {Name:mkc0c048424e93ea218c431df85390cd9e6b5ba4 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:01:52.182545    1744 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\apiserver.key.2ea63880
	I0507 18:01:52.182857    1744 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\apiserver.crt.2ea63880 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.135.136]
	I0507 18:01:52.408036    1744 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\apiserver.crt.2ea63880 ...
	I0507 18:01:52.408036    1744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\apiserver.crt.2ea63880: {Name:mk3b8b5a9f6e100914635cc7f160eba8100dfcd6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:01:52.409098    1744 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\apiserver.key.2ea63880 ...
	I0507 18:01:52.409098    1744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\apiserver.key.2ea63880: {Name:mkef2f77fcc9f37a26f132dfafee56f32af37af6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:01:52.410199    1744 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\apiserver.crt.2ea63880 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\apiserver.crt
	I0507 18:01:52.423087    1744 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\apiserver.key.2ea63880 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\apiserver.key
	I0507 18:01:52.424080    1744 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\proxy-client.key
	I0507 18:01:52.425017    1744 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\proxy-client.crt with IP's: []
	I0507 18:01:52.666042    1744 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\proxy-client.crt ...
	I0507 18:01:52.666042    1744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\proxy-client.crt: {Name:mk05937ada8637bf6ac0f4807db179d542a1a6d2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:01:52.667313    1744 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\proxy-client.key ...
	I0507 18:01:52.667313    1744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\proxy-client.key: {Name:mkfe830707aa389b0a29d714c78e901fc3bbd016 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:01:52.678219    1744 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0507 18:01:52.679344    1744 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0507 18:01:52.679614    1744 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0507 18:01:52.679821    1744 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0507 18:01:52.681055    1744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 18:01:52.723177    1744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 18:01:52.765222    1744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 18:01:52.805648    1744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0507 18:01:52.844237    1744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0507 18:01:52.883096    1744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0507 18:01:52.917163    1744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0507 18:01:52.956483    1744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0507 18:01:52.996155    1744 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 18:01:53.035055    1744 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0507 18:01:53.073889    1744 ssh_runner.go:195] Run: openssl version
	I0507 18:01:53.089922    1744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 18:01:53.116689    1744 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:01:53.123914    1744 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:01:53.131688    1744 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:01:53.147756    1744 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0507 18:01:53.176092    1744 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 18:01:53.183139    1744 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0507 18:01:53.183418    1744 kubeadm.go:391] StartCluster: {Name:addons-809100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:addons-809100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.135.136 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 18:01:53.189165    1744 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0507 18:01:53.216337    1744 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0507 18:01:53.242990    1744 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0507 18:01:53.267442    1744 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0507 18:01:53.284207    1744 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0507 18:01:53.284207    1744 kubeadm.go:156] found existing configuration files:
	
	I0507 18:01:53.292423    1744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0507 18:01:53.306647    1744 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0507 18:01:53.315259    1744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0507 18:01:53.337806    1744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0507 18:01:53.352893    1744 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0507 18:01:53.361603    1744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0507 18:01:53.387308    1744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0507 18:01:53.402138    1744 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0507 18:01:53.410710    1744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0507 18:01:53.435756    1744 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0507 18:01:53.450563    1744 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0507 18:01:53.460484    1744 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0507 18:01:53.476356    1744 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0507 18:01:53.692528    1744 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0507 18:02:05.398312    1744 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0507 18:02:05.398479    1744 kubeadm.go:309] [preflight] Running pre-flight checks
	I0507 18:02:05.398479    1744 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0507 18:02:05.399003    1744 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0507 18:02:05.399152    1744 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0507 18:02:05.399252    1744 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0507 18:02:05.401861    1744 out.go:204]   - Generating certificates and keys ...
	I0507 18:02:05.402033    1744 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0507 18:02:05.402204    1744 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0507 18:02:05.402382    1744 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0507 18:02:05.402545    1744 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0507 18:02:05.402862    1744 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0507 18:02:05.403052    1744 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0507 18:02:05.403239    1744 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0507 18:02:05.403239    1744 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-809100 localhost] and IPs [172.19.135.136 127.0.0.1 ::1]
	I0507 18:02:05.403239    1744 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0507 18:02:05.403847    1744 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-809100 localhost] and IPs [172.19.135.136 127.0.0.1 ::1]
	I0507 18:02:05.403847    1744 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0507 18:02:05.403847    1744 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0507 18:02:05.403847    1744 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0507 18:02:05.404470    1744 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0507 18:02:05.404541    1744 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0507 18:02:05.404541    1744 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0507 18:02:05.404541    1744 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0507 18:02:05.404541    1744 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0507 18:02:05.405124    1744 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0507 18:02:05.405124    1744 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0507 18:02:05.405124    1744 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0507 18:02:05.408005    1744 out.go:204]   - Booting up control plane ...
	I0507 18:02:05.408005    1744 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0507 18:02:05.408005    1744 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0507 18:02:05.408005    1744 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0507 18:02:05.408777    1744 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0507 18:02:05.408777    1744 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0507 18:02:05.408777    1744 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0507 18:02:05.408777    1744 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0507 18:02:05.408777    1744 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0507 18:02:05.408777    1744 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002927827s
	I0507 18:02:05.408777    1744 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0507 18:02:05.409802    1744 kubeadm.go:309] [api-check] The API server is healthy after 6.002131898s
	I0507 18:02:05.409802    1744 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0507 18:02:05.409802    1744 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0507 18:02:05.409802    1744 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0507 18:02:05.410733    1744 kubeadm.go:309] [mark-control-plane] Marking the node addons-809100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0507 18:02:05.410733    1744 kubeadm.go:309] [bootstrap-token] Using token: qb22on.lz4q5f381cft94yh
	I0507 18:02:05.414580    1744 out.go:204]   - Configuring RBAC rules ...
	I0507 18:02:05.414580    1744 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0507 18:02:05.415091    1744 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0507 18:02:05.415157    1744 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0507 18:02:05.415157    1744 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0507 18:02:05.415741    1744 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0507 18:02:05.415741    1744 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0507 18:02:05.416287    1744 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0507 18:02:05.416287    1744 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0507 18:02:05.416287    1744 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0507 18:02:05.416287    1744 kubeadm.go:309] 
	I0507 18:02:05.416287    1744 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0507 18:02:05.416287    1744 kubeadm.go:309] 
	I0507 18:02:05.416854    1744 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0507 18:02:05.416854    1744 kubeadm.go:309] 
	I0507 18:02:05.416854    1744 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0507 18:02:05.416854    1744 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0507 18:02:05.416854    1744 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0507 18:02:05.416854    1744 kubeadm.go:309] 
	I0507 18:02:05.416854    1744 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0507 18:02:05.416854    1744 kubeadm.go:309] 
	I0507 18:02:05.417377    1744 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0507 18:02:05.417432    1744 kubeadm.go:309] 
	I0507 18:02:05.417432    1744 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0507 18:02:05.417432    1744 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0507 18:02:05.417432    1744 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0507 18:02:05.417432    1744 kubeadm.go:309] 
	I0507 18:02:05.418028    1744 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0507 18:02:05.418028    1744 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0507 18:02:05.418028    1744 kubeadm.go:309] 
	I0507 18:02:05.418028    1744 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token qb22on.lz4q5f381cft94yh \
	I0507 18:02:05.418562    1744 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 \
	I0507 18:02:05.418626    1744 kubeadm.go:309] 	--control-plane 
	I0507 18:02:05.418626    1744 kubeadm.go:309] 
	I0507 18:02:05.418626    1744 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0507 18:02:05.418626    1744 kubeadm.go:309] 
	I0507 18:02:05.418626    1744 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token qb22on.lz4q5f381cft94yh \
	I0507 18:02:05.419196    1744 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 
	I0507 18:02:05.419223    1744 cni.go:84] Creating CNI manager for ""
	I0507 18:02:05.419223    1744 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 18:02:05.421603    1744 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0507 18:02:05.432510    1744 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0507 18:02:05.448991    1744 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0507 18:02:05.481370    1744 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0507 18:02:05.491548    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-809100 minikube.k8s.io/updated_at=2024_05_07T18_02_05_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f minikube.k8s.io/name=addons-809100 minikube.k8s.io/primary=true
	I0507 18:02:05.494142    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:05.502234    1744 ops.go:34] apiserver oom_adj: -16
	I0507 18:02:05.649971    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:06.161066    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:06.661598    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:07.166383    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:07.664143    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:08.163592    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:08.666460    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:09.163326    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:09.666211    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:10.156613    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:10.656964    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:11.162150    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:11.666986    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:12.166422    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:12.654104    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:13.159942    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:13.661139    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:14.153593    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:14.668464    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:15.167723    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:15.653107    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:16.159349    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:16.659597    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:17.162395    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:17.664043    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:18.154307    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:18.658653    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:19.150519    1744 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:02:19.243408    1744 kubeadm.go:1107] duration metric: took 13.7609587s to wait for elevateKubeSystemPrivileges
	W0507 18:02:19.243408    1744 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0507 18:02:19.243408    1744 kubeadm.go:393] duration metric: took 26.0581945s to StartCluster
	I0507 18:02:19.243408    1744 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:02:19.243971    1744 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:02:19.245503    1744 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:02:19.247212    1744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0507 18:02:19.247649    1744 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.135.136 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:02:19.247715    1744 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0507 18:02:19.253152    1744 out.go:177] * Verifying Kubernetes components...
	I0507 18:02:19.247886    1744 addons.go:69] Setting yakd=true in profile "addons-809100"
	I0507 18:02:19.247886    1744 addons.go:69] Setting inspektor-gadget=true in profile "addons-809100"
	I0507 18:02:19.247886    1744 addons.go:69] Setting metrics-server=true in profile "addons-809100"
	I0507 18:02:19.247886    1744 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-809100"
	I0507 18:02:19.247951    1744 addons.go:69] Setting registry=true in profile "addons-809100"
	I0507 18:02:19.247951    1744 addons.go:69] Setting storage-provisioner=true in profile "addons-809100"
	I0507 18:02:19.247951    1744 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-809100"
	I0507 18:02:19.247951    1744 addons.go:69] Setting volumesnapshots=true in profile "addons-809100"
	I0507 18:02:19.247998    1744 addons.go:69] Setting default-storageclass=true in profile "addons-809100"
	I0507 18:02:19.247998    1744 addons.go:69] Setting cloud-spanner=true in profile "addons-809100"
	I0507 18:02:19.247998    1744 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-809100"
	I0507 18:02:19.247998    1744 addons.go:69] Setting helm-tiller=true in profile "addons-809100"
	I0507 18:02:19.247998    1744 addons.go:69] Setting gcp-auth=true in profile "addons-809100"
	I0507 18:02:19.247998    1744 addons.go:69] Setting ingress=true in profile "addons-809100"
	I0507 18:02:19.247998    1744 addons.go:69] Setting ingress-dns=true in profile "addons-809100"
	I0507 18:02:19.247998    1744 config.go:182] Loaded profile config "addons-809100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:02:19.253152    1744 addons.go:234] Setting addon yakd=true in "addons-809100"
	I0507 18:02:19.260112    1744 addons.go:234] Setting addon storage-provisioner=true in "addons-809100"
	I0507 18:02:19.260112    1744 addons.go:234] Setting addon metrics-server=true in "addons-809100"
	I0507 18:02:19.260112    1744 addons.go:234] Setting addon registry=true in "addons-809100"
	I0507 18:02:19.260112    1744 addons.go:234] Setting addon inspektor-gadget=true in "addons-809100"
	I0507 18:02:19.260112    1744 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-809100"
	I0507 18:02:19.260112    1744 host.go:66] Checking if "addons-809100" exists ...
	I0507 18:02:19.260112    1744 host.go:66] Checking if "addons-809100" exists ...
	I0507 18:02:19.260112    1744 host.go:66] Checking if "addons-809100" exists ...
	I0507 18:02:19.260112    1744 addons.go:234] Setting addon helm-tiller=true in "addons-809100"
	I0507 18:02:19.260112    1744 host.go:66] Checking if "addons-809100" exists ...
	I0507 18:02:19.260679    1744 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-809100"
	I0507 18:02:19.260679    1744 addons.go:234] Setting addon volumesnapshots=true in "addons-809100"
	I0507 18:02:19.261206    1744 host.go:66] Checking if "addons-809100" exists ...
	I0507 18:02:19.261496    1744 addons.go:234] Setting addon cloud-spanner=true in "addons-809100"
	I0507 18:02:19.261496    1744 host.go:66] Checking if "addons-809100" exists ...
	I0507 18:02:19.261496    1744 addons.go:234] Setting addon ingress=true in "addons-809100"
	I0507 18:02:19.262100    1744 host.go:66] Checking if "addons-809100" exists ...
	I0507 18:02:19.262100    1744 addons.go:234] Setting addon ingress-dns=true in "addons-809100"
	I0507 18:02:19.262100    1744 host.go:66] Checking if "addons-809100" exists ...
	I0507 18:02:19.262730    1744 mustload.go:65] Loading cluster: addons-809100
	I0507 18:02:19.262730    1744 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-809100"
	I0507 18:02:19.260112    1744 host.go:66] Checking if "addons-809100" exists ...
	I0507 18:02:19.262730    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:19.263385    1744 host.go:66] Checking if "addons-809100" exists ...
	I0507 18:02:19.263438    1744 config.go:182] Loaded profile config "addons-809100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:02:19.260112    1744 host.go:66] Checking if "addons-809100" exists ...
	I0507 18:02:19.260112    1744 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-809100"
	I0507 18:02:19.263767    1744 host.go:66] Checking if "addons-809100" exists ...
	I0507 18:02:19.263767    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:19.265512    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:19.267855    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:19.267855    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:19.268202    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:19.269108    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:19.269108    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:19.270118    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:19.270118    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:19.271143    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:19.271143    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:19.271502    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:19.271502    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:19.287065    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:19.355434    1744 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:02:19.805705    1744 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0507 18:02:20.491532    1744 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.1360193s)
	I0507 18:02:20.505730    1744 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 18:02:21.509550    1744 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.7034422s)
	I0507 18:02:21.509606    1744 start.go:946] {"host.minikube.internal": 172.19.128.1} host record injected into CoreDNS's ConfigMap
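The CoreDNS rewrite that just completed can be reproduced in isolation. The sketch below, assuming GNU sed and an illustrative local file name `demo-corefile` (not a real minikube path), applies the same two `sed` expressions the log shows minikube piping the ConfigMap through: one inserts a `hosts` block resolving `host.minikube.internal` to the host gateway (172.19.128.1 in this run) ahead of the `forward` plugin, the other inserts `log` ahead of `errors`.

```shell
# Hedged sketch: replays the sed transformation from the log above on a
# minimal stand-in Corefile; demo-corefile is illustrative, not minikube's.
printf '        errors\n        forward . /etc/resolv.conf {\n        }\n' > demo-corefile
# Same two expressions as the logged command (GNU sed interprets \n in the
# inserted text as newlines):
sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.128.1 host.minikube.internal\n           fallthrough\n        }' \
    -e '/^        errors *$/i \        log' demo-corefile
```

In the real run the result is fed back through `kubectl replace -f -`, which is why the logged pipeline ends with a second kubectl invocation.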
	I0507 18:02:21.514915    1744 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.0091155s)
	I0507 18:02:21.516512    1744 node_ready.go:35] waiting up to 6m0s for node "addons-809100" to be "Ready" ...
	I0507 18:02:21.738347    1744 node_ready.go:49] node "addons-809100" has status "Ready":"True"
	I0507 18:02:21.738347    1744 node_ready.go:38] duration metric: took 221.8189ms for node "addons-809100" to be "Ready" ...
	I0507 18:02:21.738347    1744 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 18:02:21.985622    1744 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-c4lxf" in "kube-system" namespace to be "Ready" ...
	I0507 18:02:22.126667    1744 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-809100" context rescaled to 1 replicas
	I0507 18:02:24.088845    1744 pod_ready.go:102] pod "coredns-7db6d8ff4d-c4lxf" in "kube-system" namespace has status "Ready":"False"
	I0507 18:02:24.781252    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:24.781880    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:24.791275    1744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0507 18:02:24.809283    1744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0507 18:02:24.817012    1744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0507 18:02:24.822958    1744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0507 18:02:24.827748    1744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0507 18:02:24.831762    1744 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0507 18:02:24.835823    1744 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0507 18:02:24.839823    1744 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0507 18:02:24.843471    1744 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0507 18:02:24.843471    1744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0507 18:02:24.843471    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:24.863201    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:24.863201    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:24.869625    1744 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0507 18:02:24.876980    1744 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0507 18:02:24.876980    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0507 18:02:24.876980    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:24.956283    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:24.956283    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:24.956283    1744 host.go:66] Checking if "addons-809100" exists ...
	I0507 18:02:25.147613    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:25.147613    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:25.153124    1744 addons.go:234] Setting addon default-storageclass=true in "addons-809100"
	I0507 18:02:25.153124    1744 host.go:66] Checking if "addons-809100" exists ...
	I0507 18:02:25.154145    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:25.195741    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:25.195741    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:25.202025    1744 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.17
	I0507 18:02:25.211738    1744 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0507 18:02:25.211738    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0507 18:02:25.211738    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:25.217992    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:25.219010    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:25.219010    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:25.222006    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:25.221004    1744 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-809100"
	I0507 18:02:25.221004    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:25.224012    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:25.226994    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:25.229977    1744 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0507 18:02:25.227976    1744 host.go:66] Checking if "addons-809100" exists ...
	I0507 18:02:25.227976    1744 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 18:02:25.227976    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:25.238776    1744 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 18:02:25.238776    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0507 18:02:25.238878    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:25.241093    1744 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0507 18:02:25.233011    1744 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0507 18:02:25.234995    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:25.237453    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:25.240465    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:25.244633    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:25.247970    1744 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.1
	I0507 18:02:25.245177    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0507 18:02:25.245239    1744 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0507 18:02:25.245239    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:25.255550    1744 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0507 18:02:25.252268    1744 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0507 18:02:25.255550    1744 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0507 18:02:25.255550    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:25.252268    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:25.256324    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:25.256324    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:25.252268    1744 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0507 18:02:25.260905    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:25.263947    1744 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.27.0
	I0507 18:02:25.264061    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:25.268062    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:25.270539    1744 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0507 18:02:25.293842    1744 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.15.0
	I0507 18:02:25.303705    1744 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0507 18:02:25.304696    1744 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0507 18:02:25.317767    1744 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0507 18:02:25.317767    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0507 18:02:25.317767    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:25.318925    1744 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0507 18:02:25.318925    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:25.319521    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:26.094336    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:26.094336    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:26.117920    1744 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0507 18:02:26.180424    1744 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.1
	I0507 18:02:26.213977    1744 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0507 18:02:26.224978    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:26.235268    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:26.234571    1744 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0507 18:02:26.249608    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0507 18:02:26.249608    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:26.253618    1744 out.go:177]   - Using image docker.io/registry:2.8.3
	I0507 18:02:26.258608    1744 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0507 18:02:26.261618    1744 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0507 18:02:26.261618    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0507 18:02:26.261618    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:26.456311    1744 pod_ready.go:102] pod "coredns-7db6d8ff4d-c4lxf" in "kube-system" namespace has status "Ready":"False"
	I0507 18:02:28.798373    1744 pod_ready.go:102] pod "coredns-7db6d8ff4d-c4lxf" in "kube-system" namespace has status "Ready":"False"
	I0507 18:02:30.073714    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:30.073714    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:30.073714    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:02:30.075713    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:30.075713    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:30.075713    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:02:30.079175    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:30.079175    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:30.079175    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:02:30.091692    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:30.091692    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:30.091692    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:02:30.202750    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:30.202750    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:30.202888    1744 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0507 18:02:30.202888    1744 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0507 18:02:30.202888    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:30.471909    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:30.471909    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:30.476503    1744 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0507 18:02:30.478969    1744 out.go:177]   - Using image docker.io/busybox:stable
	I0507 18:02:30.482574    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:30.483384    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:30.483562    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:02:30.483562    1744 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0507 18:02:30.483562    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0507 18:02:30.483562    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:30.487744    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:30.487744    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:30.487744    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:02:30.547582    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:30.548031    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:30.551037    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:02:31.057951    1744 pod_ready.go:102] pod "coredns-7db6d8ff4d-c4lxf" in "kube-system" namespace has status "Ready":"False"
	I0507 18:02:31.076147    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:31.076147    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:31.076327    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:02:32.051850    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:32.051850    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:32.051850    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:02:32.096204    1744 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0507 18:02:32.096204    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:32.418321    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:32.418321    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:32.418510    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:02:32.512561    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:32.512561    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:32.512561    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:02:32.628319    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:32.628319    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:32.628319    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:02:33.648176    1744 pod_ready.go:92] pod "coredns-7db6d8ff4d-c4lxf" in "kube-system" namespace has status "Ready":"True"
	I0507 18:02:33.648176    1744 pod_ready.go:81] duration metric: took 11.6617507s for pod "coredns-7db6d8ff4d-c4lxf" in "kube-system" namespace to be "Ready" ...
	I0507 18:02:33.649181    1744 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-xqcsn" in "kube-system" namespace to be "Ready" ...
	I0507 18:02:33.933873    1744 pod_ready.go:92] pod "coredns-7db6d8ff4d-xqcsn" in "kube-system" namespace has status "Ready":"True"
	I0507 18:02:33.933873    1744 pod_ready.go:81] duration metric: took 284.672ms for pod "coredns-7db6d8ff4d-xqcsn" in "kube-system" namespace to be "Ready" ...
	I0507 18:02:33.933873    1744 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-809100" in "kube-system" namespace to be "Ready" ...
	I0507 18:02:33.980934    1744 pod_ready.go:92] pod "etcd-addons-809100" in "kube-system" namespace has status "Ready":"True"
	I0507 18:02:33.980934    1744 pod_ready.go:81] duration metric: took 47.0581ms for pod "etcd-addons-809100" in "kube-system" namespace to be "Ready" ...
	I0507 18:02:33.980934    1744 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-809100" in "kube-system" namespace to be "Ready" ...
	I0507 18:02:34.159792    1744 pod_ready.go:92] pod "kube-apiserver-addons-809100" in "kube-system" namespace has status "Ready":"True"
	I0507 18:02:34.159792    1744 pod_ready.go:81] duration metric: took 178.8448ms for pod "kube-apiserver-addons-809100" in "kube-system" namespace to be "Ready" ...
	I0507 18:02:34.159792    1744 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-809100" in "kube-system" namespace to be "Ready" ...
	I0507 18:02:34.196541    1744 pod_ready.go:92] pod "kube-controller-manager-addons-809100" in "kube-system" namespace has status "Ready":"True"
	I0507 18:02:34.196541    1744 pod_ready.go:81] duration metric: took 36.747ms for pod "kube-controller-manager-addons-809100" in "kube-system" namespace to be "Ready" ...
	I0507 18:02:34.196541    1744 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rvknj" in "kube-system" namespace to be "Ready" ...
	I0507 18:02:34.230736    1744 pod_ready.go:92] pod "kube-proxy-rvknj" in "kube-system" namespace has status "Ready":"True"
	I0507 18:02:34.230736    1744 pod_ready.go:81] duration metric: took 34.1926ms for pod "kube-proxy-rvknj" in "kube-system" namespace to be "Ready" ...
	I0507 18:02:34.230736    1744 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-809100" in "kube-system" namespace to be "Ready" ...
	I0507 18:02:34.265008    1744 pod_ready.go:92] pod "kube-scheduler-addons-809100" in "kube-system" namespace has status "Ready":"True"
	I0507 18:02:34.265073    1744 pod_ready.go:81] duration metric: took 34.3346ms for pod "kube-scheduler-addons-809100" in "kube-system" namespace to be "Ready" ...
	I0507 18:02:34.265073    1744 pod_ready.go:38] duration metric: took 12.5258634s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 18:02:34.265073    1744 api_server.go:52] waiting for apiserver process to appear ...
	I0507 18:02:34.279578    1744 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 18:02:34.352273    1744 api_server.go:72] duration metric: took 15.103433s to wait for apiserver process to appear ...
	I0507 18:02:34.352273    1744 api_server.go:88] waiting for apiserver healthz status ...
	I0507 18:02:34.352273    1744 api_server.go:253] Checking apiserver healthz at https://172.19.135.136:8443/healthz ...
	I0507 18:02:34.403680    1744 api_server.go:279] https://172.19.135.136:8443/healthz returned 200:
	ok
	I0507 18:02:34.438596    1744 api_server.go:141] control plane version: v1.30.0
	I0507 18:02:34.438596    1744 api_server.go:131] duration metric: took 86.3168ms to wait for apiserver health ...
	I0507 18:02:34.438596    1744 system_pods.go:43] waiting for kube-system pods to appear ...
	I0507 18:02:34.476279    1744 system_pods.go:59] 7 kube-system pods found
	I0507 18:02:34.476279    1744 system_pods.go:61] "coredns-7db6d8ff4d-c4lxf" [ae185503-9ddd-4629-b9d1-63dae9c5aae6] Running
	I0507 18:02:34.476279    1744 system_pods.go:61] "coredns-7db6d8ff4d-xqcsn" [71b04d92-c381-476f-a550-785185dc1609] Running
	I0507 18:02:34.476279    1744 system_pods.go:61] "etcd-addons-809100" [f3ba5cb2-f525-40d4-8ad8-c882b9aa0c32] Running
	I0507 18:02:34.476279    1744 system_pods.go:61] "kube-apiserver-addons-809100" [4945b9ef-9a9f-4329-bc38-e24b7f1cf977] Running
	I0507 18:02:34.476279    1744 system_pods.go:61] "kube-controller-manager-addons-809100" [a9a763df-6a9f-44d7-8f72-96c346d8434a] Running
	I0507 18:02:34.476279    1744 system_pods.go:61] "kube-proxy-rvknj" [0b3459a3-1c83-4688-a048-8bd86b799324] Running
	I0507 18:02:34.476279    1744 system_pods.go:61] "kube-scheduler-addons-809100" [5ddcf3f1-56ce-4a34-8350-1a61be59b2f4] Running
	I0507 18:02:34.476279    1744 system_pods.go:74] duration metric: took 37.6805ms to wait for pod list to return data ...
	I0507 18:02:34.476279    1744 default_sa.go:34] waiting for default service account to be created ...
	I0507 18:02:34.497284    1744 default_sa.go:45] found service account: "default"
	I0507 18:02:34.497284    1744 default_sa.go:55] duration metric: took 21.0034ms for default service account to be created ...
	I0507 18:02:34.497284    1744 system_pods.go:116] waiting for k8s-apps to be running ...
	I0507 18:02:34.672834    1744 system_pods.go:86] 7 kube-system pods found
	I0507 18:02:34.672834    1744 system_pods.go:89] "coredns-7db6d8ff4d-c4lxf" [ae185503-9ddd-4629-b9d1-63dae9c5aae6] Running
	I0507 18:02:34.672834    1744 system_pods.go:89] "coredns-7db6d8ff4d-xqcsn" [71b04d92-c381-476f-a550-785185dc1609] Running
	I0507 18:02:34.672834    1744 system_pods.go:89] "etcd-addons-809100" [f3ba5cb2-f525-40d4-8ad8-c882b9aa0c32] Running
	I0507 18:02:34.672834    1744 system_pods.go:89] "kube-apiserver-addons-809100" [4945b9ef-9a9f-4329-bc38-e24b7f1cf977] Running
	I0507 18:02:34.672834    1744 system_pods.go:89] "kube-controller-manager-addons-809100" [a9a763df-6a9f-44d7-8f72-96c346d8434a] Running
	I0507 18:02:34.672834    1744 system_pods.go:89] "kube-proxy-rvknj" [0b3459a3-1c83-4688-a048-8bd86b799324] Running
	I0507 18:02:34.672834    1744 system_pods.go:89] "kube-scheduler-addons-809100" [5ddcf3f1-56ce-4a34-8350-1a61be59b2f4] Running
	I0507 18:02:34.672834    1744 system_pods.go:126] duration metric: took 175.5374ms to wait for k8s-apps to be running ...
	I0507 18:02:34.672834    1744 system_svc.go:44] waiting for kubelet service to be running ....
	I0507 18:02:34.686361    1744 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 18:02:34.755359    1744 system_svc.go:56] duration metric: took 82.5199ms WaitForService to wait for kubelet
	I0507 18:02:34.756359    1744 kubeadm.go:576] duration metric: took 15.5074905s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 18:02:34.756359    1744 node_conditions.go:102] verifying NodePressure condition ...
	I0507 18:02:34.877195    1744 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 18:02:34.877195    1744 node_conditions.go:123] node cpu capacity is 2
	I0507 18:02:34.877195    1744 node_conditions.go:105] duration metric: took 120.828ms to run NodePressure ...
	I0507 18:02:34.877195    1744 start.go:240] waiting for startup goroutines ...
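The Ready-waits above (node, each system-critical pod, the apiserver process, and the healthz probe) all follow the same poll-until-deadline pattern, each reporting a `duration metric` on success. A minimal shell sketch of that loop, assuming a stand-in check command and flag file rather than minikube's actual Go helpers:

```shell
# Hedged sketch of the poll-until-deadline pattern behind the node_ready /
# pod_ready waits; wait_until and the flag file are illustrative only.
wait_until() {
  timeout=$1; shift
  start=$(date +%s)
  until "$@"; do                                  # retry the check command
    elapsed=$(( $(date +%s) - start ))
    [ "$elapsed" -ge "$timeout" ] && return 1     # deadline exceeded
    sleep 1
  done
}
rm -f /tmp/addons-ready-flag
( sleep 1; touch /tmp/addons-ready-flag ) &       # "readiness" arrives ~1s later
wait_until 10 test -e /tmp/addons-ready-flag && echo ready
```

minikube's real waiters poll the Kubernetes API for the node/pod `Ready` condition instead of a file, but the timeout bookkeeping is the same shape.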
	I0507 18:02:35.559390    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:35.559390    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:35.566785    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:02:35.598647    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:35.598647    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:35.598647    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:02:36.022391    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:02:36.022391    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:36.022667    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:02:36.351449    1744 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0507 18:02:36.351449    1744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0507 18:02:36.395096    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:02:36.395096    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:36.396103    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:02:36.496066    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:02:36.496066    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:36.496753    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:02:36.539406    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:02:36.539406    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:36.539406    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:02:36.553406    1744 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0507 18:02:36.553406    1744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0507 18:02:36.739033    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:02:36.739308    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:36.739308    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:02:36.796251    1744 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0507 18:02:36.797339    1744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0507 18:02:36.867450    1744 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0507 18:02:36.867450    1744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0507 18:02:36.870262    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:02:36.870262    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:36.871722    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:02:36.930738    1744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0507 18:02:36.970838    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:02:36.970838    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:36.971165    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:02:37.001816    1744 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0507 18:02:37.001816    1744 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0507 18:02:37.110028    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:37.110086    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:37.110086    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:02:37.141207    1744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0507 18:02:37.235760    1744 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0507 18:02:37.235760    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0507 18:02:37.250772    1744 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0507 18:02:37.250772    1744 ssh_runner.go:362] scp helm-tiller/helm-tiller-rbac.yaml --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0507 18:02:37.286812    1744 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0507 18:02:37.286812    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0507 18:02:37.394415    1744 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0507 18:02:37.394415    1744 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0507 18:02:37.432266    1744 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0507 18:02:37.432266    1744 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0507 18:02:37.520775    1744 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0507 18:02:37.520775    1744 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0507 18:02:37.529475    1744 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0507 18:02:37.529529    1744 ssh_runner.go:362] scp helm-tiller/helm-tiller-svc.yaml --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0507 18:02:37.588198    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:02:37.588198    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:37.589343    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:02:37.655861    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:02:37.655861    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:37.655861    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:02:37.685496    1744 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0507 18:02:37.685496    1744 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0507 18:02:37.754564    1744 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0507 18:02:37.754564    1744 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0507 18:02:37.757168    1744 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0507 18:02:37.757168    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0507 18:02:37.780125    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:02:37.780190    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:37.780190    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:02:37.795673    1744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0507 18:02:37.839886    1744 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0507 18:02:37.839886    1744 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0507 18:02:37.926318    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:02:37.926318    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:37.926318    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:02:37.930320    1744 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0507 18:02:37.930320    1744 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0507 18:02:37.933311    1744 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0507 18:02:37.933311    1744 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0507 18:02:38.011671    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:02:38.011780    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:38.011780    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:02:38.044813    1744 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0507 18:02:38.044866    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0507 18:02:38.115916    1744 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0507 18:02:38.115968    1744 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0507 18:02:38.141346    1744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0507 18:02:38.224190    1744 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0507 18:02:38.224267    1744 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0507 18:02:38.232991    1744 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0507 18:02:38.232991    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0507 18:02:38.238131    1744 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0507 18:02:38.238131    1744 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0507 18:02:38.269575    1744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0507 18:02:38.321978    1744 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0507 18:02:38.321978    1744 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0507 18:02:38.433858    1744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0507 18:02:38.443046    1744 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0507 18:02:38.443046    1744 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0507 18:02:38.471826    1744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0507 18:02:38.474237    1744 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0507 18:02:38.474237    1744 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0507 18:02:38.516259    1744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0507 18:02:38.568886    1744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 18:02:38.571831    1744 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0507 18:02:38.571831    1744 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0507 18:02:38.746715    1744 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0507 18:02:38.746715    1744 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0507 18:02:38.755229    1744 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0507 18:02:38.755229    1744 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0507 18:02:38.811560    1744 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0507 18:02:38.811560    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0507 18:02:38.828743    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:02:38.828743    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:38.828743    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:02:38.871233    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:02:38.871233    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:38.871759    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:02:39.026543    1744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (2.0956605s)
	I0507 18:02:39.110756    1744 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0507 18:02:39.110756    1744 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0507 18:02:39.163322    1744 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0507 18:02:39.163322    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0507 18:02:39.174326    1744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0507 18:02:39.444974    1744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0507 18:02:39.491407    1744 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0507 18:02:39.491407    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0507 18:02:39.712205    1744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0507 18:02:39.732971    1744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0507 18:02:39.748286    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:02:39.748336    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:39.748387    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:02:39.890347    1744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0507 18:02:40.097414    1744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (2.9560036s)
	I0507 18:02:40.685791    1744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (2.889918s)
	I0507 18:02:40.914186    1744 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0507 18:02:41.327819    1744 addons.go:234] Setting addon gcp-auth=true in "addons-809100"
	I0507 18:02:41.327819    1744 host.go:66] Checking if "addons-809100" exists ...
	I0507 18:02:41.330015    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:43.365160    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:43.365160    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:43.374286    1744 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0507 18:02:43.374286    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM addons-809100 ).state
	I0507 18:02:43.694657    1744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.5528106s)
	I0507 18:02:43.694747    1744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.4247082s)
	I0507 18:02:43.694747    1744 addons.go:470] Verifying addon metrics-server=true in "addons-809100"
	I0507 18:02:45.012311    1744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (6.578s)
	I0507 18:02:45.014356    1744 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-809100 service yakd-dashboard -n yakd-dashboard
	
	I0507 18:02:45.472489    1744 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:02:45.472489    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:45.472489    1744 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM addons-809100 ).networkadapters[0]).ipaddresses[0]
	I0507 18:02:47.952605    1744 main.go:141] libmachine: [stdout =====>] : 172.19.135.136
	
	I0507 18:02:47.952706    1744 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:02:47.952706    1744 sshutil.go:53] new ssh client: &{IP:172.19.135.136 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\addons-809100\id_rsa Username:docker}
	I0507 18:02:48.373124    1744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.9006161s)
	I0507 18:02:48.373124    1744 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-809100"
	I0507 18:02:48.373124    1744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.856186s)
	I0507 18:02:48.379251    1744 out.go:177] * Verifying csi-hostpath-driver addon...
	I0507 18:02:48.373687    1744 addons.go:470] Verifying addon ingress=true in "addons-809100"
	I0507 18:02:48.373868    1744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (9.8043067s)
	I0507 18:02:48.373964    1744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.1990042s)
	I0507 18:02:48.374185    1744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (8.9285955s)
	I0507 18:02:48.374245    1744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (8.661443s)
	I0507 18:02:48.374406    1744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (8.6407954s)
	I0507 18:02:48.374570    1744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.4836388s)
	I0507 18:02:48.374723    1744 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (5.0000929s)
	I0507 18:02:48.379347    1744 addons.go:470] Verifying addon registry=true in "addons-809100"
	I0507 18:02:48.386592    1744 out.go:177] * Verifying ingress addon...
	W0507 18:02:48.379487    1744 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0507 18:02:48.385640    1744 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0507 18:02:48.390977    1744 retry.go:31] will retry after 325.107363ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0507 18:02:48.392457    1744 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0507 18:02:48.393155    1744 out.go:177] * Verifying registry addon...
	I0507 18:02:48.392565    1744 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.1
	I0507 18:02:48.396514    1744 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0507 18:02:48.399508    1744 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0507 18:02:48.401369    1744 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0507 18:02:48.401369    1744 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0507 18:02:48.439896    1744 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0507 18:02:48.439896    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:48.450512    1744 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0507 18:02:48.450542    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:48.451328    1744 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0507 18:02:48.451328    1744 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	W0507 18:02:48.457469    1744 out.go:239] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class csi-hostpath-sc as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "csi-hostpath-sc": the object has been modified; please apply your changes to the latest version and try again]
	I0507 18:02:48.471657    1744 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0507 18:02:48.471657    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:48.510999    1744 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0507 18:02:48.510999    1744 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0507 18:02:48.616518    1744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0507 18:02:48.735234    1744 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0507 18:02:48.911144    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:48.917474    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:48.929986    1744 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0507 18:02:48.929986    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:49.404891    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:49.405994    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:49.412285    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:49.912357    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:49.919036    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:49.925214    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:50.414628    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:50.420366    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:50.423782    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:50.627551    1744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.0108135s)
	I0507 18:02:50.637062    1744 addons.go:470] Verifying addon gcp-auth=true in "addons-809100"
	I0507 18:02:50.642504    1744 out.go:177] * Verifying gcp-auth addon...
	I0507 18:02:50.649741    1744 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0507 18:02:50.661093    1744 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0507 18:02:50.661185    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:50.910297    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:50.914994    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:50.920654    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:51.137485    1744 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.4020855s)
	I0507 18:02:51.169758    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:51.409191    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:51.412363    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:51.419078    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:51.659523    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:51.919003    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:51.923631    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:51.924553    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:52.166631    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:52.410217    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:52.411016    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:52.412183    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:52.658775    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:52.917623    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:52.920979    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:52.923596    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:53.169514    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:53.408850    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:53.410277    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:53.413661    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:53.658927    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:54.178786    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:54.180180    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:54.180743    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:54.184328    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:54.683086    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:54.684737    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:54.686842    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:54.687821    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:54.913428    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:54.913724    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:54.919555    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:55.162964    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:55.421270    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:55.421875    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:55.427126    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:55.663445    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:55.925888    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:55.935305    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:55.940070    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:56.190072    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:56.415458    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:56.420510    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:56.447044    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:56.659079    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:56.917654    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:56.917654    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:56.918662    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:57.181074    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:57.401039    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:57.409897    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:57.409897    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:57.670072    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:57.911671    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:57.912594    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:57.916116    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:58.163269    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:58.414607    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:58.419258    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:58.421998    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:58.665591    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:58.906120    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:58.906120    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:58.909119    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:59.156478    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:59.414259    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:02:59.415151    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:59.422344    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:59.664675    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:02:59.908785    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:02:59.910617    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:02:59.911003    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:00.161133    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:00.416744    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:00.417720    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:00.417720    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:00.667360    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:00.909478    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:00.909679    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:00.910604    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:01.159581    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:01.412670    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:01.413967    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:01.416643    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:01.664884    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:01.907430    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:01.908107    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:01.910483    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:02.157450    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:02.417268    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:02.417439    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:02.417439    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:02.665287    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:02.956318    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:02.958297    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:02.959277    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:03.161461    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:03.415158    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:03.416148    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:03.417144    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:03.665715    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:03.908520    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:03.909069    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:03.912099    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:04.158174    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:04.413749    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:04.414649    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:04.415761    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:04.665358    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:04.907258    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:04.907258    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:04.914963    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:05.160018    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:05.416297    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:05.416297    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:05.418558    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:05.657742    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:05.905656    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:05.905938    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:05.910121    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:06.166533    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:06.403791    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:06.404900    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:06.409699    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:06.669096    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:06.920622    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:06.920622    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:06.922635    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:07.158304    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:07.432752    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:07.433999    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:07.434530    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:07.666496    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:07.912162    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:07.913805    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:07.919938    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:08.159628    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:08.418273    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:08.420873    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:08.425100    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:08.668365    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:08.910806    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:08.911751    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:08.912594    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:09.606874    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:09.607942    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:09.609306    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:09.613819    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:09.868982    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:09.911758    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:09.914084    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:09.915397    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:10.164547    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:10.403640    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:10.404222    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:10.407593    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:11.274797    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:11.275299    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:11.276187    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:11.279047    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:11.279983    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:11.403615    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:11.405896    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:11.409724    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:11.708870    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:12.072076    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:12.072076    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:12.076035    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:12.167586    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:12.407889    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:12.410799    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:12.415616    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:12.662814    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:12.922925    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:12.923058    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:12.926040    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:13.170291    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:13.413421    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:13.418776    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:13.419024    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:13.660037    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:13.915042    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:13.917694    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:13.919595    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:14.168524    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:14.412652    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:14.416986    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:14.420380    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:14.662644    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:14.907508    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:14.907861    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:14.912908    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:15.170015    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:15.408314    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:15.408314    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:15.411134    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:15.660777    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:15.900401    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:15.901397    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:15.917403    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:16.168396    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:16.409463    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:16.411105    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:16.413918    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:16.661486    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:16.907092    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:16.907357    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:16.911352    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:17.169534    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:17.414735    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:17.416435    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:17.419864    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:17.666174    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:17.905611    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:17.907620    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:17.909607    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:18.159971    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:18.419260    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:18.420191    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:18.420191    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:18.668407    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:18.911144    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:18.912130    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:18.912130    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:19.162486    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:19.403268    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:19.412111    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:19.413049    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:19.670750    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:19.913212    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:19.914217    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:19.915214    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:20.165163    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:20.407361    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:20.414550    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:20.415175    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:20.672118    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:20.913865    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:20.916623    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:20.917614    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:21.168377    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:21.413023    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:21.413023    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:21.414026    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:21.662028    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:21.903698    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:21.905071    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:21.907024    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:22.169646    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:22.416913    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:22.418298    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:22.418652    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:22.666213    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:22.907298    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:22.907298    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:22.910284    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:23.160761    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:23.419408    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:23.420884    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:23.423497    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:23.668371    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:23.922694    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:23.923079    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:23.923370    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:24.161168    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:24.418191    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:24.419336    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:24.422925    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:24.669384    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:25.011301    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:25.016773    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:25.019086    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:25.310244    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:25.402927    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:25.402927    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:25.409021    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:25.671614    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:25.914415    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:25.914963    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:25.915039    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:26.162582    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:26.403597    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:26.404608    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:26.408611    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:26.672429    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:26.916961    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:26.917746    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:26.917805    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:27.165486    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:27.407220    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:27.409364    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:27.413748    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:27.658626    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:27.926733    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:27.926813    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:27.926948    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:28.164594    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:28.416034    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:28.416034    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:28.422805    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:28.661735    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:28.912737    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:28.913428    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:28.921251    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:29.160724    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:29.668128    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:29.671251    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:29.672171    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:29.674373    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:29.918813    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:29.919609    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:29.922400    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:30.165493    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:30.407997    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:30.408063    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:30.410597    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:30.673872    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:30.916694    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:30.917071    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:30.922046    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:31.167557    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:31.410629    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:31.418883    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:31.427275    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:31.663960    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:31.907463    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:31.908725    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:31.913881    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:32.157607    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:32.416307    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:32.417282    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:32.421984    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:32.669098    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:32.910427    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:32.911091    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:32.913662    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:33.170536    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:33.414177    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:33.415131    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:33.415131    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:33.666349    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:34.777707    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:34.781107    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:34.781901    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:34.782852    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:34.796707    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:34.816312    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:34.816346    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:34.817003    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:34.902388    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:34.903348    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:34.908981    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:35.171630    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:35.411823    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:35.412440    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:35.419246    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:35.672713    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:35.914680    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:35.920215    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:35.921773    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:36.165128    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:36.407789    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:36.411892    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:36.415737    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:36.673249    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:36.912483    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:36.913734    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:36.914747    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:37.161959    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:37.418298    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:37.418672    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:37.418672    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:37.664240    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:37.907634    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:37.910751    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:37.915211    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:38.174730    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:38.412966    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:38.417593    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:38.419930    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:38.666265    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:38.913088    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:38.913780    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:38.919229    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:39.170348    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:39.812978    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:39.814909    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:39.814909    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:39.817016    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:39.906307    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:39.910335    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:39.919736    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:40.173987    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:40.422130    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:40.426833    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:40.428041    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:40.669952    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:40.908723    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:40.925620    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:40.931715    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:41.167527    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:41.429498    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:41.433112    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:41.438843    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:41.658847    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:41.910543    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:41.911540    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:41.914538    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:42.162282    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:42.404239    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:42.406218    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:42.418358    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:42.670103    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:42.913553    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:42.913553    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:42.914547    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:43.164441    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:43.407293    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:43.408007    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:43.411383    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:43.672158    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:43.916953    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:43.917597    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:43.918323    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:44.167139    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:44.410080    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:44.416245    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:44.418206    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:44.660574    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:44.922354    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:44.922953    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:44.923826    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:45.171631    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:45.416284    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:45.419022    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:45.422463    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:45.664329    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:45.907416    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:45.909007    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:45.916056    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:46.158393    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:46.415247    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:46.415488    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:46.417612    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:46.664771    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:46.928637    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:46.928637    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:46.929484    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:47.170683    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:47.411297    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:47.415307    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:47.415307    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:47.773301    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:47.912730    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:47.912795    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:47.918061    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:48.159444    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:48.417786    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:48.418797    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:48.421765    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:48.667961    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:48.908572    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:48.912145    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:48.915670    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:49.160743    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:49.421861    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:49.428046    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:49.429633    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:49.669802    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:49.911871    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:49.915840    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:49.915840    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:50.257745    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:50.413011    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:50.414277    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:50.416289    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:50.660878    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:50.919140    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:50.919715    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:50.919918    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:51.161355    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:51.828552    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:51.828604    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:51.835415    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:51.838008    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:51.922897    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:51.931699    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:51.935159    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:52.171112    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:52.406433    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:52.410912    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:52.412753    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:52.660728    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:52.913573    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:52.914612    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:52.915896    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:53.163306    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:53.418945    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:53.420752    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:53.426554    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:53.669940    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:53.916613    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:53.916613    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:53.917195    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:54.163465    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:54.582438    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:54.582438    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:54.588651    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:54.664131    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:54.920795    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0507 18:03:54.921108    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:54.923606    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:55.171219    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:55.408264    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:55.409207    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:55.413052    1744 kapi.go:107] duration metric: took 1m7.0119296s to wait for kubernetes.io/minikube-addons=registry ...
	I0507 18:03:55.673466    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:55.914201    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:55.915884    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:56.164403    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:56.423536    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:56.424236    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:56.670494    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:56.910449    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:56.911093    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:57.162449    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:57.419062    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:57.420064    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:57.667570    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:57.905162    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:57.905162    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:58.160503    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:58.419190    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:58.419569    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:58.671451    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:58.911903    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:58.912379    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:59.167838    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:59.409325    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:59.409325    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:03:59.661905    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:03:59.914741    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:03:59.916789    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:00.166865    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:00.405159    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:00.408390    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:00.669018    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:00.910628    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:00.911981    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:01.160814    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:01.420954    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:01.422015    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:01.671739    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:01.913501    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:01.918525    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:02.166170    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:02.418203    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:02.421183    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:03.331109    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:03.331931    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:03.333995    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:03.336787    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:03.824905    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:03.826335    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:03.826386    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:04.174165    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:04.174503    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:04.179458    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:04.405499    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:04.405499    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:04.674500    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:04.912133    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:04.913285    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:05.162090    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:05.418336    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:05.420176    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:05.676457    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:05.909302    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:05.914450    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:06.159838    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:06.419862    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:06.420861    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:06.674448    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:06.921058    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:06.924741    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:07.168442    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:07.410481    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:07.410968    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:07.660202    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:07.918483    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:07.921725    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:08.169792    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:08.408138    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:08.410932    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:08.675140    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:08.917958    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:08.917958    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:09.167972    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:09.728214    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:09.729825    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:09.733340    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:09.919627    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:09.922196    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:10.259286    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:10.407167    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:10.408538    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:10.673715    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:10.909823    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:10.909878    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:11.164265    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:11.419179    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:11.419257    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:11.670508    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:11.910665    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:11.913393    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:12.160259    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:12.417273    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:12.418753    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:12.669832    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:12.912334    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:12.913351    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:13.163896    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:13.421521    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:13.421521    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:13.672765    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:13.963054    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:13.966390    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:14.316204    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:14.416382    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:14.418572    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:14.678363    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:14.916020    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:14.918495    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:15.164587    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:15.416489    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:15.416645    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:15.662336    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:15.917205    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:15.919286    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:16.169836    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:16.422266    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:16.422419    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:16.670243    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:16.925797    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:16.928153    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:17.161087    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:17.418035    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:17.420511    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:17.667883    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:17.911188    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:17.913969    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:18.163969    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:18.417479    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:18.420971    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:18.666633    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:18.908410    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:18.909109    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:19.175166    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:19.416447    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:19.422557    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:19.664743    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:19.918651    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:19.921496    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:20.165902    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:20.422091    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:20.423057    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:20.667732    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:20.920823    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:20.921830    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:21.176396    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:21.418051    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:21.419041    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:21.670334    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:21.913602    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:21.916743    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:22.162629    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:22.412491    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:22.415350    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:22.664573    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:22.925649    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:22.926192    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:23.171898    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:23.413823    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:23.414488    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:23.666017    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:24.430458    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:24.430968    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:24.431118    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:24.483109    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:24.494781    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:24.666767    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:24.909647    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:24.913400    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:25.161658    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:25.420634    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:25.420942    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:25.666859    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:25.916006    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:25.917753    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:26.165359    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:26.419804    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:26.419970    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:26.664793    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:26.918085    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:26.922184    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:27.169820    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:27.411964    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:27.412981    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:27.661852    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:27.918114    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:27.918598    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:28.170679    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:28.873469    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:28.875660    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:28.879189    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:28.914617    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:28.914876    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:29.165318    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:29.419917    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:29.421461    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:29.667996    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:29.909440    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:29.910149    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:30.177208    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:30.422810    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:30.422810    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:30.670906    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:30.926174    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:30.931482    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:31.161703    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:31.422340    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:31.423303    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:31.667166    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:31.918850    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:31.919071    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:32.169152    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:32.419859    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:32.422616    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:32.672754    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:32.906718    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:32.907823    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:33.175412    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:33.416951    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:33.418230    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:33.667288    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:33.929703    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:33.943798    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:34.176268    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:34.414803    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:34.424670    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:34.779261    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:34.924592    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:34.924592    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:35.175185    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:35.411213    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:35.412498    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:35.677549    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:35.918605    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:35.919598    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:36.171101    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:36.413587    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:36.413587    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:36.663726    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:36.913832    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:36.919057    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:37.773976    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:37.773976    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:37.773976    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:37.778617    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:37.913348    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:37.913950    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:38.163660    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:38.433407    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:38.434561    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:38.673277    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:38.916558    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:38.916558    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:39.168899    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:39.422499    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:39.422767    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:39.675753    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:39.927362    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:39.927528    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:40.167978    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:40.408030    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:40.411957    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:40.676019    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:40.916525    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:40.917384    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:41.168264    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:41.433243    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:41.435802    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:41.673160    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:41.911601    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:41.914483    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:42.174475    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:42.421796    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:42.426284    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:42.672040    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:42.912497    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:42.915755    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:43.165237    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:43.419798    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:43.421810    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:43.673687    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:43.916410    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:43.916474    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:44.167964    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:44.519497    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:44.527274    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:44.669506    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:44.911830    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:44.913922    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:45.164551    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:45.423353    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:45.423499    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:45.667993    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:45.922045    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:45.922045    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:46.182487    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:46.541601    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:46.545818    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:46.676683    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:46.918408    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:46.921090    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:47.170374    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:47.423247    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:47.424749    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:47.670943    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:47.921256    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:47.924461    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:48.166854    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:48.416489    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:48.419410    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:48.665965    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:48.920408    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:48.922488    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:49.166975    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:49.420748    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:49.420870    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:49.666764    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:49.921403    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:49.922573    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:50.166771    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:50.421395    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:50.423193    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:50.666985    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:50.921777    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:50.925120    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:51.173955    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:51.415227    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:51.418447    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:51.663140    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:51.922472    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:51.922770    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:52.173413    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:52.729861    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:52.730609    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:52.732872    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:52.914917    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:52.917277    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:53.178034    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:53.419267    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:53.421453    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:53.668793    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:53.921298    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:53.922022    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:54.169667    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:54.411057    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:54.412527    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:54.677540    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:54.917944    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:54.921536    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:55.169938    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:55.409562    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:55.411553    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:55.664135    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:55.919793    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:55.923144    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:56.165525    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:56.417380    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:56.420575    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0507 18:04:56.673859    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:56.913370    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:56.916575    1744 kapi.go:107] duration metric: took 2m8.5221169s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0507 18:04:57.163399    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:57.426179    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:57.669898    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:57.922417    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:58.170212    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:58.424002    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:58.672074    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:58.939050    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:59.176035    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:59.410896    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:04:59.664396    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:04:59.912419    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:00.173724    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:00.422597    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:00.670779    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:00.921770    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:01.168465    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:01.423373    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:01.671762    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:01.923527    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:02.172564    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:02.423685    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:02.670758    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:02.921404    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:03.175377    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:03.414964    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:03.667210    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:03.919415    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:04.171065    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:04.423149    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:04.673739    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:04.913873    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:05.164520    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:05.431523    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:05.672260    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:05.922877    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:06.172231    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:06.409404    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:06.681394    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:06.916967    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:07.168841    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:07.421673    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:07.674031    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:08.322990    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:08.323692    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:08.420963    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:08.674855    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:09.014306    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:09.167533    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:09.623465    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:09.676469    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:09.916040    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:10.169352    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:10.423578    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:10.676740    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:10.923325    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:11.170340    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:11.423158    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:11.676131    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:11.920191    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:12.170411    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:12.425946    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:12.678538    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:12.916877    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:13.170974    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:13.412310    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:13.678381    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:13.914319    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:14.166227    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:14.422692    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:14.670395    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:14.910184    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:15.178026    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:15.415952    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:15.718191    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:15.916858    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:16.168548    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:16.424565    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:16.676299    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:16.913340    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:17.166097    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:17.422625    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:18.629112    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:18.633584    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:18.644155    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:18.644614    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:19.019871    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:19.347408    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:19.348920    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:19.424374    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:19.680188    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:19.918244    1744 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0507 18:05:20.170095    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:20.414488    1744 kapi.go:107] duration metric: took 2m32.0116113s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0507 18:05:20.665627    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:21.174784    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:21.982136    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:22.171931    1744 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0507 18:05:22.679532    1744 kapi.go:107] duration metric: took 2m32.0193727s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0507 18:05:22.682731    1744 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-809100 cluster.
	I0507 18:05:22.685389    1744 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0507 18:05:22.687745    1744 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0507 18:05:22.690567    1744 out.go:177] * Enabled addons: cloud-spanner, ingress-dns, helm-tiller, metrics-server, nvidia-device-plugin, yakd, inspektor-gadget, storage-provisioner, storage-provisioner-rancher, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0507 18:05:22.694526    1744 addons.go:505] duration metric: took 3m3.434352s for enable addons: enabled=[cloud-spanner ingress-dns helm-tiller metrics-server nvidia-device-plugin yakd inspektor-gadget storage-provisioner storage-provisioner-rancher volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0507 18:05:22.694526    1744 start.go:245] waiting for cluster config update ...
	I0507 18:05:22.694526    1744 start.go:254] writing updated cluster config ...
	I0507 18:05:22.704477    1744 ssh_runner.go:195] Run: rm -f paused
	I0507 18:05:22.927259    1744 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0507 18:05:22.936072    1744 out.go:177] * Done! kubectl is now configured to use "addons-809100" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 07 18:06:27 addons-809100 dockerd[1326]: time="2024-05-07T18:06:27.439587466Z" level=info msg="ignoring event" container=00e64618c2e1b7aa82adcdaac499555119cc9003fa977891d75ad32f5583e307 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 07 18:06:27 addons-809100 dockerd[1332]: time="2024-05-07T18:06:27.441179332Z" level=info msg="shim disconnected" id=00e64618c2e1b7aa82adcdaac499555119cc9003fa977891d75ad32f5583e307 namespace=moby
	May 07 18:06:27 addons-809100 dockerd[1332]: time="2024-05-07T18:06:27.441274042Z" level=warning msg="cleaning up after shim disconnected" id=00e64618c2e1b7aa82adcdaac499555119cc9003fa977891d75ad32f5583e307 namespace=moby
	May 07 18:06:27 addons-809100 dockerd[1332]: time="2024-05-07T18:06:27.441341449Z" level=info msg="cleaning up dead shim" namespace=moby
	May 07 18:06:27 addons-809100 dockerd[1326]: time="2024-05-07T18:06:27.615135015Z" level=info msg="ignoring event" container=8d7b1ec4288b4795e42729f047d51140b6ecfdc20b8b2ae04bfb7059a5a533ef module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 07 18:06:27 addons-809100 dockerd[1332]: time="2024-05-07T18:06:27.616706678Z" level=info msg="shim disconnected" id=8d7b1ec4288b4795e42729f047d51140b6ecfdc20b8b2ae04bfb7059a5a533ef namespace=moby
	May 07 18:06:27 addons-809100 dockerd[1332]: time="2024-05-07T18:06:27.616831691Z" level=warning msg="cleaning up after shim disconnected" id=8d7b1ec4288b4795e42729f047d51140b6ecfdc20b8b2ae04bfb7059a5a533ef namespace=moby
	May 07 18:06:27 addons-809100 dockerd[1332]: time="2024-05-07T18:06:27.616848193Z" level=info msg="cleaning up dead shim" namespace=moby
	May 07 18:06:32 addons-809100 dockerd[1326]: time="2024-05-07T18:06:32.088534107Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=575f440a16353dd790dc07abf6471e9519d9aa31aafd3470585954e28f509750 spanID=06038ca8238ee1b1 traceID=cfc5acaecb8b97d16e3e9b653f8619d0
	May 07 18:06:32 addons-809100 dockerd[1326]: time="2024-05-07T18:06:32.132199255Z" level=info msg="ignoring event" container=575f440a16353dd790dc07abf6471e9519d9aa31aafd3470585954e28f509750 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 07 18:06:32 addons-809100 dockerd[1332]: time="2024-05-07T18:06:32.133815227Z" level=info msg="shim disconnected" id=575f440a16353dd790dc07abf6471e9519d9aa31aafd3470585954e28f509750 namespace=moby
	May 07 18:06:32 addons-809100 dockerd[1332]: time="2024-05-07T18:06:32.133881134Z" level=warning msg="cleaning up after shim disconnected" id=575f440a16353dd790dc07abf6471e9519d9aa31aafd3470585954e28f509750 namespace=moby
	May 07 18:06:32 addons-809100 dockerd[1332]: time="2024-05-07T18:06:32.133892135Z" level=info msg="cleaning up dead shim" namespace=moby
	May 07 18:06:32 addons-809100 dockerd[1326]: time="2024-05-07T18:06:32.347234345Z" level=info msg="ignoring event" container=3f52b50f40f9e4c408b7dccbd43d894bd83f152b355a12d12839c0e6b2b79d42 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 07 18:06:32 addons-809100 dockerd[1332]: time="2024-05-07T18:06:32.348061433Z" level=info msg="shim disconnected" id=3f52b50f40f9e4c408b7dccbd43d894bd83f152b355a12d12839c0e6b2b79d42 namespace=moby
	May 07 18:06:32 addons-809100 dockerd[1332]: time="2024-05-07T18:06:32.348973130Z" level=warning msg="cleaning up after shim disconnected" id=3f52b50f40f9e4c408b7dccbd43d894bd83f152b355a12d12839c0e6b2b79d42 namespace=moby
	May 07 18:06:32 addons-809100 dockerd[1332]: time="2024-05-07T18:06:32.349161650Z" level=info msg="cleaning up dead shim" namespace=moby
	May 07 18:06:35 addons-809100 dockerd[1326]: time="2024-05-07T18:06:35.677330897Z" level=info msg="ignoring event" container=4add077f5d7305497d8095407ae32f10994e50a4a30641e288be2e6a5f472041 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 07 18:06:35 addons-809100 dockerd[1332]: time="2024-05-07T18:06:35.677761543Z" level=info msg="shim disconnected" id=4add077f5d7305497d8095407ae32f10994e50a4a30641e288be2e6a5f472041 namespace=moby
	May 07 18:06:35 addons-809100 dockerd[1332]: time="2024-05-07T18:06:35.677809848Z" level=warning msg="cleaning up after shim disconnected" id=4add077f5d7305497d8095407ae32f10994e50a4a30641e288be2e6a5f472041 namespace=moby
	May 07 18:06:35 addons-809100 dockerd[1332]: time="2024-05-07T18:06:35.677819349Z" level=info msg="cleaning up dead shim" namespace=moby
	May 07 18:06:35 addons-809100 dockerd[1326]: time="2024-05-07T18:06:35.845341600Z" level=info msg="ignoring event" container=8bcf01ef37473b19805973cd575f06a33f1db6f4cc18af9c73eea8d6c36be363 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	May 07 18:06:35 addons-809100 dockerd[1332]: time="2024-05-07T18:06:35.847560036Z" level=info msg="shim disconnected" id=8bcf01ef37473b19805973cd575f06a33f1db6f4cc18af9c73eea8d6c36be363 namespace=moby
	May 07 18:06:35 addons-809100 dockerd[1332]: time="2024-05-07T18:06:35.848246009Z" level=warning msg="cleaning up after shim disconnected" id=8bcf01ef37473b19805973cd575f06a33f1db6f4cc18af9c73eea8d6c36be363 namespace=moby
	May 07 18:06:35 addons-809100 dockerd[1332]: time="2024-05-07T18:06:35.848547342Z" level=info msg="cleaning up dead shim" namespace=moby
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD
	9bd07d27b2f13       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:abef4926f3e6f0aa50c968aa954f990a6b0178e04a955293a49d96810c43d0e1                            23 seconds ago       Exited              gadget                                   4                   7888e4f1b1cf5       gadget-kzdfd
	d940fa2a879dc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:e6c5b3bc32072ea370d34c27836efd11b3519d25bd444c2a8efc339cff0e20fb                                 About a minute ago   Running             gcp-auth                                 0                   83dfee703cd2b       gcp-auth-5db96cd9b4-v7sfz
	27668f0f8aea3       registry.k8s.io/ingress-nginx/controller@sha256:e24f39d3eed6bcc239a56f20098878845f62baa34b9f2be2fd2c38ce9fb0f29e                             About a minute ago   Running             controller                               0                   abed4beb3dc63       ingress-nginx-controller-768f948f8f-mpnml
	6da12cddc6faf       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          About a minute ago   Running             csi-snapshotter                          0                   58927f8e06b5f       csi-hostpathplugin-prqtm
	ae0415d8d8341       registry.k8s.io/sig-storage/csi-provisioner@sha256:ee3b525d5b89db99da3b8eb521d9cd90cb6e9ef0fbb651e98bb37be78d36b5b8                          About a minute ago   Running             csi-provisioner                          0                   58927f8e06b5f       csi-hostpathplugin-prqtm
	3a08e8b616097       registry.k8s.io/sig-storage/livenessprobe@sha256:cacee2b5c36dd59d4c7e8469c05c9e4ef53ecb2df9025fa8c10cdaf61bce62f0                            About a minute ago   Running             liveness-probe                           0                   58927f8e06b5f       csi-hostpathplugin-prqtm
	9339c99c1f87d       registry.k8s.io/sig-storage/hostpathplugin@sha256:92257881c1d6493cf18299a24af42330f891166560047902b8d431fb66b01af5                           About a minute ago   Running             hostpath                                 0                   58927f8e06b5f       csi-hostpathplugin-prqtm
	4513d64f7c2df       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:f1c25991bac2fbb7f5fcf91ed9438df31e30edee6bed5a780464238aa09ad24c                About a minute ago   Running             node-driver-registrar                    0                   58927f8e06b5f       csi-hostpathplugin-prqtm
	474f5d67342c7       684c5ea3b61b2                                                                                                                                About a minute ago   Exited              patch                                    2                   de53595cd39c1       ingress-nginx-admission-patch-bfvb8
	193374c74593c       registry.k8s.io/sig-storage/csi-resizer@sha256:425d8f1b769398127767b06ed97ce62578a3179bcb99809ce93a1649e025ffe7                              About a minute ago   Running             csi-resizer                              0                   ca5abe009a332       csi-hostpath-resizer-0
	897ccfa359bcf       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:80b9ba94aa2afe24553d69bd165a6a51552d1582d68618ec00d3b804a7d9193c   2 minutes ago        Running             csi-external-health-monitor-controller   0                   58927f8e06b5f       csi-hostpathplugin-prqtm
	635f88049c555       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:36d05b4077fb8e3d13663702fa337f124675ba8667cbd949c03a8e8ea6fa4366                   2 minutes ago        Exited              create                                   0                   55d40f03472f1       ingress-nginx-admission-create-g6g5z
	74961a8bea7f4       registry.k8s.io/sig-storage/csi-attacher@sha256:9a685020911e2725ad019dbce6e4a5ab93d51e3d4557f115e64343345e05781b                             2 minutes ago        Running             csi-attacher                             0                   a698ca135ea1d       csi-hostpath-attacher-0
	d29a2564be807       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   f04a686f7c9a5       snapshot-controller-745499f584-ps2td
	67775952017f6       registry.k8s.io/sig-storage/snapshot-controller@sha256:823c75d0c45d1427f6d850070956d9ca657140a7bbf828381541d1d808475280                      2 minutes ago        Running             volume-snapshot-controller               0                   7f4c525ffeb74       snapshot-controller-745499f584-vm42t
	966db1cc6f6b2       marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                                                        2 minutes ago        Running             yakd                                     0                   ab3b8cdab4299       yakd-dashboard-5ddbf7d777-fdlqx
	470c3688faa7b       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             3 minutes ago        Running             minikube-ingress-dns                     0                   82dc99786dcab       kube-ingress-dns-minikube
	392cb4981a597       gcr.io/cloud-spanner-emulator/emulator@sha256:6a72be4b6978a014035656e130840ad1bc06c8aa7c4de78871464ad5714565d4                               3 minutes ago        Running             cloud-spanner-emulator                   0                   4bf379a42d279       cloud-spanner-emulator-6fcd4f6f98-dv82b
	99d74d5bfcd6b       6e38f40d628db                                                                                                                                3 minutes ago        Running             storage-provisioner                      0                   c59ad3d5122dd       storage-provisioner
	67bb6ea3c0f35       cbb01a7bd410d                                                                                                                                4 minutes ago        Running             coredns                                  0                   a07cfc6d742ef       coredns-7db6d8ff4d-c4lxf
	565b1db151c44       a0bf559e280cf                                                                                                                                4 minutes ago        Running             kube-proxy                               0                   cdfafd0cd4c06       kube-proxy-rvknj
	0c2fdf9034fb3       c7aad43836fa5                                                                                                                                4 minutes ago        Running             kube-controller-manager                  0                   17f71151c3217       kube-controller-manager-addons-809100
	b4eb33bb71590       259c8277fcbbc                                                                                                                                4 minutes ago        Running             kube-scheduler                           0                   d57d57af19b83       kube-scheduler-addons-809100
	0f3f2ce7c6fa2       c42f13656d0b2                                                                                                                                4 minutes ago        Running             kube-apiserver                           0                   7b9b4a7344546       kube-apiserver-addons-809100
	b6049088f85bb       3861cfcd7c04c                                                                                                                                4 minutes ago        Running             etcd                                     0                   36247afa9770c       etcd-addons-809100
	
	
	==> controller_ingress [27668f0f8aea] <==
	I0507 18:05:19.826756       7 main.go:248] "Running in Kubernetes cluster" major="1" minor="30" git="v1.30.0" state="clean" commit="7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a" platform="linux/amd64"
	I0507 18:05:19.995552       7 main.go:101] "SSL fake certificate created" file="/etc/ingress-controller/ssl/default-fake-certificate.pem"
	I0507 18:05:20.018962       7 ssl.go:535] "loading tls certificate" path="/usr/local/certificates/cert" key="/usr/local/certificates/key"
	I0507 18:05:20.034873       7 nginx.go:264] "Starting NGINX Ingress controller"
	I0507 18:05:20.050852       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"5b8d4831-2eed-4fdf-be94-445b6f80a74b", APIVersion:"v1", ResourceVersion:"684", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/ingress-nginx-controller
	I0507 18:05:20.056911       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"tcp-services", UID:"8e627f88-3a38-4811-a9e3-ac6072bc757f", APIVersion:"v1", ResourceVersion:"686", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/tcp-services
	I0507 18:05:20.056957       7 event.go:364] Event(v1.ObjectReference{Kind:"ConfigMap", Namespace:"ingress-nginx", Name:"udp-services", UID:"13d8bd93-2034-4bc3-9258-b72ece914d7d", APIVersion:"v1", ResourceVersion:"688", FieldPath:""}): type: 'Normal' reason: 'CREATE' ConfigMap ingress-nginx/udp-services
	I0507 18:05:21.238715       7 nginx.go:307] "Starting NGINX process"
	I0507 18:05:21.239220       7 leaderelection.go:250] attempting to acquire leader lease ingress-nginx/ingress-nginx-leader...
	I0507 18:05:21.239480       7 nginx.go:327] "Starting validation webhook" address=":8443" certPath="/usr/local/certificates/cert" keyPath="/usr/local/certificates/key"
	I0507 18:05:21.239811       7 controller.go:190] "Configuration changes detected, backend reload required"
	I0507 18:05:21.261727       7 leaderelection.go:260] successfully acquired lease ingress-nginx/ingress-nginx-leader
	I0507 18:05:21.262637       7 status.go:84] "New leader elected" identity="ingress-nginx-controller-768f948f8f-mpnml"
	I0507 18:05:21.269050       7 status.go:219] "POD is not ready" pod="ingress-nginx/ingress-nginx-controller-768f948f8f-mpnml" node="addons-809100"
	I0507 18:05:21.318163       7 controller.go:210] "Backend successfully reloaded"
	I0507 18:05:21.318544       7 controller.go:221] "Initial sync, sleeping for 1 second"
	I0507 18:05:21.318870       7 event.go:364] Event(v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-768f948f8f-mpnml", UID:"6e0152b5-7d5e-4e50-9010-763a838d8f4b", APIVersion:"v1", ResourceVersion:"731", FieldPath:""}): type: 'Normal' reason: 'RELOAD' NGINX reload triggered due to a change in configuration
	NGINX Ingress controller
	  Release:       v1.10.1
	  Build:         4fb5aac1dd3669daa3a14d9de3e3cdb371b4c518
	  Repository:    https://github.com/kubernetes/ingress-nginx
	  nginx version: nginx/1.25.3
	
	-------------------------------------------------------------------------------
	
	
	
	==> coredns [67bb6ea3c0f3] <==
	[INFO] 10.244.0.22:37862 - 58643 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000201323s
	[INFO] 10.244.0.22:52926 - 42129 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000149217s
	[INFO] 10.244.0.22:40218 - 32588 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000066007s
	[INFO] 10.244.0.22:60400 - 13865 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000140416s
	[INFO] 10.244.0.22:41629 - 9665 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000156018s
	[INFO] 10.244.0.22:52570 - 48785 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 240 0.002800518s
	[INFO] 10.244.0.22:33996 - 42992 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd 192 0.003003642s
	[INFO] 10.244.0.25:50002 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000340137s
	[INFO] 10.244.0.25:56312 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000262329s
	[INFO] 10.244.0.8:54983 - 42078 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000289231s
	[INFO] 10.244.0.8:54983 - 33112 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000248227s
	[INFO] 10.244.0.8:43908 - 10517 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000213423s
	[INFO] 10.244.0.8:43908 - 51566 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000237626s
	[INFO] 10.244.0.8:47686 - 28552 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000191021s
	[INFO] 10.244.0.8:47686 - 20101 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000136115s
	[INFO] 10.244.0.8:39851 - 21412 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000085109s
	[INFO] 10.244.0.8:39851 - 33696 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000149716s
	[INFO] 10.244.0.8:54135 - 36460 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000084509s
	[INFO] 10.244.0.8:54135 - 3945 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000490654s
	[INFO] 10.244.0.8:50086 - 31617 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000106311s
	[INFO] 10.244.0.8:50086 - 6274 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000127714s
	[INFO] 10.244.0.8:47987 - 5342 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000110613s
	[INFO] 10.244.0.8:47987 - 14803 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000119013s
	[INFO] 10.244.0.8:41339 - 41054 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000062406s
	[INFO] 10.244.0.8:41339 - 32348 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00018502s
	
	
	==> describe nodes <==
	Name:               addons-809100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-809100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	                    minikube.k8s.io/name=addons-809100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_07T18_02_05_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-809100
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-809100"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 May 2024 18:02:02 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-809100
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 May 2024 18:06:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 May 2024 18:06:11 +0000   Tue, 07 May 2024 18:02:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 May 2024 18:06:11 +0000   Tue, 07 May 2024 18:02:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 May 2024 18:06:11 +0000   Tue, 07 May 2024 18:02:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 May 2024 18:06:11 +0000   Tue, 07 May 2024 18:02:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.135.136
	  Hostname:    addons-809100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 cd50b0fdd83345919be45d76d193f695
	  System UUID:                e4f7bb52-4b96-534d-87ef-6dd325f9fa9e
	  Boot ID:                    45dbd9b3-78ae-493c-a7dc-41ec20a8b5b8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     cloud-spanner-emulator-6fcd4f6f98-dv82b      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  gadget                      gadget-kzdfd                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m51s
	  gcp-auth                    gcp-auth-5db96cd9b4-v7sfz                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m46s
	  ingress-nginx               ingress-nginx-controller-768f948f8f-mpnml    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         3m49s
	  kube-system                 coredns-7db6d8ff4d-c4lxf                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m17s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 csi-hostpathplugin-prqtm                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m48s
	  kube-system                 etcd-addons-809100                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m31s
	  kube-system                 kube-apiserver-addons-809100                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-controller-manager-addons-809100        200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 kube-proxy-rvknj                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m18s
	  kube-system                 kube-scheduler-addons-809100                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m33s
	  kube-system                 snapshot-controller-745499f584-ps2td         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 snapshot-controller-745499f584-vm42t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  yakd-dashboard              yakd-dashboard-5ddbf7d777-fdlqx              0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     3m51s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             388Mi (10%)  426Mi (11%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m9s                   kube-proxy       
	  Normal  Starting                 4m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m31s (x2 over 4m31s)  kubelet          Node addons-809100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m31s (x2 over 4m31s)  kubelet          Node addons-809100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m31s (x2 over 4m31s)  kubelet          Node addons-809100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m28s                  kubelet          Node addons-809100 status is now: NodeReady
	  Normal  RegisteredNode           4m18s                  node-controller  Node addons-809100 event: Registered Node addons-809100 in Controller
	
	
	==> dmesg <==
	[  +0.515239] kauditd_printk_skb: 12 callbacks suppressed
	[  +5.150011] kauditd_printk_skb: 22 callbacks suppressed
	[  +6.879469] kauditd_printk_skb: 54 callbacks suppressed
	[ +10.177635] kauditd_printk_skb: 41 callbacks suppressed
	[  +5.004461] kauditd_printk_skb: 52 callbacks suppressed
	[  +7.354397] kauditd_printk_skb: 149 callbacks suppressed
	[May 7 18:04] kauditd_printk_skb: 2 callbacks suppressed
	[ +20.256057] kauditd_printk_skb: 31 callbacks suppressed
	[  +6.596378] kauditd_printk_skb: 12 callbacks suppressed
	[  +6.692054] kauditd_printk_skb: 13 callbacks suppressed
	[  +6.696142] kauditd_printk_skb: 73 callbacks suppressed
	[  +8.394833] kauditd_printk_skb: 2 callbacks suppressed
	[May 7 18:05] kauditd_printk_skb: 34 callbacks suppressed
	[  +6.772926] kauditd_printk_skb: 10 callbacks suppressed
	[  +9.654845] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.003183] kauditd_printk_skb: 15 callbacks suppressed
	[  +5.834574] kauditd_printk_skb: 6 callbacks suppressed
	[  +5.097587] kauditd_printk_skb: 64 callbacks suppressed
	[  +7.940263] kauditd_printk_skb: 16 callbacks suppressed
	[  +5.919838] kauditd_printk_skb: 22 callbacks suppressed
	[  +5.968678] kauditd_printk_skb: 14 callbacks suppressed
	[May 7 18:06] kauditd_printk_skb: 11 callbacks suppressed
	[  +7.033021] kauditd_printk_skb: 36 callbacks suppressed
	[  +9.721558] kauditd_printk_skb: 31 callbacks suppressed
	[  +5.009167] kauditd_printk_skb: 5 callbacks suppressed
	
	
	==> etcd [b6049088f85b] <==
	{"level":"warn","ts":"2024-05-07T18:05:46.322171Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-07T18:05:45.81992Z","time spent":"502.248032ms","remote":"127.0.0.1:55794","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-05-07T18:05:46.323886Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"221.522346ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:3 size:10402"}
	{"level":"info","ts":"2024-05-07T18:05:46.323926Z","caller":"traceutil/trace.go:171","msg":"trace[1556390191] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:3; response_revision:1406; }","duration":"221.581153ms","start":"2024-05-07T18:05:46.102333Z","end":"2024-05-07T18:05:46.323915Z","steps":["trace[1556390191] 'agreement among raft nodes before linearized reading'  (duration: 221.45844ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-07T18:05:46.324204Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"238.801899ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io\" ","response":"range_response_count:1 size:553"}
	{"level":"info","ts":"2024-05-07T18:05:46.324224Z","caller":"traceutil/trace.go:171","msg":"trace[196822800] range","detail":"{range_begin:/registry/leases/kube-system/external-health-monitor-leader-hostpath-csi-k8s-io; range_end:; response_count:1; response_revision:1406; }","duration":"238.841503ms","start":"2024-05-07T18:05:46.085376Z","end":"2024-05-07T18:05:46.324217Z","steps":["trace[196822800] 'agreement among raft nodes before linearized reading'  (duration: 238.782097ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-07T18:05:46.325169Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"266.686988ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/leases/kube-system/snapshot-controller-leader\" ","response":"range_response_count:1 size:498"}
	{"level":"info","ts":"2024-05-07T18:05:46.325195Z","caller":"traceutil/trace.go:171","msg":"trace[701904311] range","detail":"{range_begin:/registry/leases/kube-system/snapshot-controller-leader; range_end:; response_count:1; response_revision:1406; }","duration":"266.714691ms","start":"2024-05-07T18:05:46.058473Z","end":"2024-05-07T18:05:46.325187Z","steps":["trace[701904311] 'agreement among raft nodes before linearized reading'  (duration: 266.640383ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-07T18:05:46.325485Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"382.424192ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:18 size:87276"}
	{"level":"info","ts":"2024-05-07T18:05:46.325511Z","caller":"traceutil/trace.go:171","msg":"trace[1338755320] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:18; response_revision:1406; }","duration":"382.477898ms","start":"2024-05-07T18:05:45.943025Z","end":"2024-05-07T18:05:46.325503Z","steps":["trace[1338755320] 'agreement among raft nodes before linearized reading'  (duration: 379.221749ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-07T18:05:46.32553Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-07T18:05:45.943009Z","time spent":"382.516902ms","remote":"127.0.0.1:56014","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":18,"response size":87300,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"info","ts":"2024-05-07T18:05:46.470264Z","caller":"traceutil/trace.go:171","msg":"trace[1293606906] transaction","detail":"{read_only:false; number_of_response:1; response_revision:1407; }","duration":"139.868593ms","start":"2024-05-07T18:05:46.330379Z","end":"2024-05-07T18:05:46.470248Z","steps":["trace[1293606906] 'process raft request'  (duration: 136.099989ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-07T18:05:46.470392Z","caller":"traceutil/trace.go:171","msg":"trace[1429209713] transaction","detail":"{read_only:false; response_revision:1409; number_of_response:1; }","duration":"127.862007ms","start":"2024-05-07T18:05:46.342407Z","end":"2024-05-07T18:05:46.470269Z","steps":["trace[1429209713] 'process raft request'  (duration: 127.832403ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-07T18:05:46.47055Z","caller":"traceutil/trace.go:171","msg":"trace[1264931688] linearizableReadLoop","detail":"{readStateIndex:1473; appliedIndex:1472; }","duration":"139.921499ms","start":"2024-05-07T18:05:46.330621Z","end":"2024-05-07T18:05:46.470542Z","steps":["trace[1264931688] 'read index received'  (duration: 135.866664ms)","trace[1264931688] 'applied index is now lower than readState.Index'  (duration: 4.054035ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-07T18:05:46.470603Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"139.968405ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-07T18:05:46.470622Z","caller":"traceutil/trace.go:171","msg":"trace[109932931] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1409; }","duration":"140.013709ms","start":"2024-05-07T18:05:46.330603Z","end":"2024-05-07T18:05:46.470616Z","steps":["trace[109932931] 'agreement among raft nodes before linearized reading'  (duration: 139.966004ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-07T18:05:46.470809Z","caller":"traceutil/trace.go:171","msg":"trace[829593541] transaction","detail":"{read_only:false; response_revision:1408; number_of_response:1; }","duration":"128.456971ms","start":"2024-05-07T18:05:46.342344Z","end":"2024-05-07T18:05:46.470801Z","steps":["trace[829593541] 'process raft request'  (duration: 127.856306ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-07T18:05:48.465658Z","caller":"traceutil/trace.go:171","msg":"trace[630790638] transaction","detail":"{read_only:false; response_revision:1422; number_of_response:1; }","duration":"121.112513ms","start":"2024-05-07T18:05:48.344529Z","end":"2024-05-07T18:05:48.465641Z","steps":["trace[630790638] 'process raft request'  (duration: 112.88973ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-07T18:05:49.674949Z","caller":"traceutil/trace.go:171","msg":"trace[1266531765] transaction","detail":"{read_only:false; response_revision:1429; number_of_response:1; }","duration":"240.326532ms","start":"2024-05-07T18:05:49.434602Z","end":"2024-05-07T18:05:49.674929Z","steps":["trace[1266531765] 'process raft request'  (duration: 240.224521ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-07T18:05:52.372657Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"282.743883ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6515"}
	{"level":"info","ts":"2024-05-07T18:05:52.372724Z","caller":"traceutil/trace.go:171","msg":"trace[593425077] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1442; }","duration":"282.851895ms","start":"2024-05-07T18:05:52.089856Z","end":"2024-05-07T18:05:52.372708Z","steps":["trace[593425077] 'range keys from in-memory index tree'  (duration: 282.466754ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-07T18:05:54.775889Z","caller":"traceutil/trace.go:171","msg":"trace[767489325] transaction","detail":"{read_only:false; response_revision:1448; number_of_response:1; }","duration":"268.715693ms","start":"2024-05-07T18:05:54.507156Z","end":"2024-05-07T18:05:54.775871Z","steps":["trace[767489325] 'process raft request'  (duration: 268.620283ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-07T18:05:55.316783Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"213.28062ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:2 size:6591"}
	{"level":"info","ts":"2024-05-07T18:05:55.316845Z","caller":"traceutil/trace.go:171","msg":"trace[278053078] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:2; response_revision:1449; }","duration":"213.378531ms","start":"2024-05-07T18:05:55.103451Z","end":"2024-05-07T18:05:55.316829Z","steps":["trace[278053078] 'range keys from in-memory index tree'  (duration: 213.135205ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-07T18:05:55.317046Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"170.001349ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/\" range_end:\"/registry/configmaps0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2024-05-07T18:05:55.317096Z","caller":"traceutil/trace.go:171","msg":"trace[155260996] range","detail":"{range_begin:/registry/configmaps/; range_end:/registry/configmaps0; response_count:0; response_revision:1449; }","duration":"170.088959ms","start":"2024-05-07T18:05:55.146994Z","end":"2024-05-07T18:05:55.317083Z","steps":["trace[155260996] 'count revisions from in-memory index tree'  (duration: 169.941143ms)"],"step_count":1}
	
	
	==> gcp-auth [d940fa2a879d] <==
	2024/05/07 18:05:22 GCP Auth Webhook started!
	2024/05/07 18:05:23 Ready to marshal response ...
	2024/05/07 18:05:23 Ready to write response ...
	2024/05/07 18:05:23 Ready to marshal response ...
	2024/05/07 18:05:23 Ready to write response ...
	2024/05/07 18:05:34 Ready to marshal response ...
	2024/05/07 18:05:34 Ready to write response ...
	2024/05/07 18:05:39 Ready to marshal response ...
	2024/05/07 18:05:39 Ready to write response ...
	2024/05/07 18:05:46 Ready to marshal response ...
	2024/05/07 18:05:46 Ready to write response ...
	2024/05/07 18:05:49 Ready to marshal response ...
	2024/05/07 18:05:49 Ready to write response ...
	2024/05/07 18:06:18 Ready to marshal response ...
	2024/05/07 18:06:18 Ready to write response ...
	
	
	==> kernel <==
	 18:06:36 up 6 min,  0 users,  load average: 2.43, 1.96, 0.91
	Linux addons-809100 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0f3f2ce7c6fa] <==
	Trace[530652041]: ["List(recursive=true) etcd3" audit-id:c68e2f2f-d030-4bc4-9c54-893c18850ec1,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 957ms (18:05:17.914)]
	Trace[530652041]: [957.782136ms] [957.782136ms] END
	I0507 18:05:18.873369       1 trace.go:236] Trace[955689900]: "Get" accept:application/json, */*,audit-id:3bc03aca-627c-4c5b-b081-d9c3656bb775,client:172.19.135.136,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (07-May-2024 18:05:17.977) (total time: 896ms):
	Trace[955689900]: ---"About to write a response" 896ms (18:05:18.873)
	Trace[955689900]: [896.331355ms] [896.331355ms] END
	I0507 18:05:18.874746       1 trace.go:236] Trace[2059468945]: "List" accept:application/json, */*,audit-id:64b3c082-300f-47cb-8dbc-3aa1b5d9560e,client:172.19.128.1,api-group:,api-version:v1,name:,subresource:,namespace:ingress-nginx,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/ingress-nginx/pods,user-agent:minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format,verb:LIST (07-May-2024 18:05:18.163) (total time: 710ms):
	Trace[2059468945]: ["List(recursive=true) etcd3" audit-id:64b3c082-300f-47cb-8dbc-3aa1b5d9560e,key:/pods/ingress-nginx,resourceVersion:,resourceVersionMatch:,limit:0,continue: 710ms (18:05:18.164)]
	Trace[2059468945]: [710.725564ms] [710.725564ms] END
	I0507 18:05:45.762974       1 trace.go:236] Trace[170251080]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:88abe5ee-2ffc-495e-90c9-469325a8f348,client:172.19.135.136,api-group:,api-version:v1,name:nvidia-device-plugin-daemonset-qk4df.17cd46dfc9a232ec,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:events,scope:resource,url:/api/v1/namespaces/kube-system/events/nvidia-device-plugin-daemonset-qk4df.17cd46dfc9a232ec,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PATCH (07-May-2024 18:05:45.092) (total time: 670ms):
	Trace[170251080]: ["GuaranteedUpdate etcd3" audit-id:88abe5ee-2ffc-495e-90c9-469325a8f348,key:/events/kube-system/nvidia-device-plugin-daemonset-qk4df.17cd46dfc9a232ec,type:*core.Event,resource:events 669ms (18:05:45.093)
	Trace[170251080]:  ---"Txn call completed" 667ms (18:05:45.762)]
	Trace[170251080]: ---"Object stored in database" 667ms (18:05:45.762)
	Trace[170251080]: [670.139608ms] [670.139608ms] END
	I0507 18:05:46.326092       1 trace.go:236] Trace[754906519]: "Update" accept:application/json, */*,audit-id:b06e94e9-a668-47c6-bf61-e57861552bdb,client:172.19.135.136,api-group:,api-version:v1,name:k8s.io-minikube-hostpath,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:endpoints,scope:resource,url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (07-May-2024 18:05:45.765) (total time: 560ms):
	Trace[754906519]: ["GuaranteedUpdate etcd3" audit-id:b06e94e9-a668-47c6-bf61-e57861552bdb,key:/services/endpoints/kube-system/k8s.io-minikube-hostpath,type:*core.Endpoints,resource:endpoints 560ms (18:05:45.765)
	Trace[754906519]:  ---"Txn call completed" 559ms (18:05:46.325)]
	Trace[754906519]: [560.56198ms] [560.56198ms] END
	I0507 18:05:46.474655       1 trace.go:236] Trace[1264543146]: "Delete" accept:application/json,audit-id:ea1ef899-22fb-417e-ae37-f8d41e7d38c7,client:172.19.128.1,api-group:,api-version:v1,name:test-local-path,subresource:,namespace:default,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/default/pods/test-local-path,user-agent:kubectl/v1.30.0 (windows/amd64) kubernetes/7c48c2b,verb:DELETE (07-May-2024 18:05:45.565) (total time: 908ms):
	Trace[1264543146]: ["GuaranteedUpdate etcd3" audit-id:ea1ef899-22fb-417e-ae37-f8d41e7d38c7,key:/pods/default/test-local-path,type:*core.Pod,resource:pods 711ms (18:05:45.762)
	Trace[1264543146]:  ---"Txn call completed" 563ms (18:05:46.326)]
	Trace[1264543146]: ---"Object deleted from database" 147ms (18:05:46.474)
	Trace[1264543146]: [908.956619ms] [908.956619ms] END
	I0507 18:05:48.383208       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	E0507 18:05:55.744271       1 upgradeaware.go:427] Error proxying data from client to backend: read tcp 172.19.135.136:8443->10.244.0.28:45996: read: connection reset by peer
	I0507 18:05:58.283377       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [0c2fdf9034fb] <==
	I0507 18:04:42.367925       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0507 18:04:42.388526       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0507 18:04:42.620636       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0507 18:04:42.726507       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0507 18:04:42.755675       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0507 18:04:42.768773       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0507 18:04:43.381864       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0507 18:04:43.394239       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0507 18:04:43.405138       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0507 18:04:43.445866       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch"
	I0507 18:04:52.183455       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-745499f584" duration="26.578727ms"
	I0507 18:04:52.184459       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/snapshot-controller-745499f584" duration="53.107µs"
	I0507 18:05:11.027163       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0507 18:05:11.086706       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-create"
	I0507 18:05:12.018917       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0507 18:05:12.063649       1 job_controller.go:566] "enqueueing job" logger="job-controller" key="gcp-auth/gcp-auth-certs-patch"
	I0507 18:05:20.611813       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="58.006µs"
	I0507 18:05:22.695569       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="19.620532ms"
	I0507 18:05:22.695645       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="gcp-auth/gcp-auth-5db96cd9b4" duration="31.703µs"
	I0507 18:05:33.180156       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="40.532564ms"
	I0507 18:05:33.180483       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-768f948f8f" duration="98.811µs"
	I0507 18:05:43.570913       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-c59844bb4" duration="3.701µs"
	I0507 18:06:02.040772       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-8d985888d" duration="5.701µs"
	I0507 18:06:10.662033       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/tiller-deploy-6677d64bcd" duration="4.601µs"
	I0507 18:06:17.700638       1 replica_set.go:676] "Finished syncing" logger="replicationcontroller-controller" kind="ReplicationController" key="kube-system/registry" duration="28.503µs"
	
	
	==> kube-proxy [565b1db151c4] <==
	I0507 18:02:26.530257       1 server_linux.go:69] "Using iptables proxy"
	I0507 18:02:26.674779       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.135.136"]
	I0507 18:02:27.135847       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0507 18:02:27.135890       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0507 18:02:27.135912       1 server_linux.go:165] "Using iptables Proxier"
	I0507 18:02:27.180478       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0507 18:02:27.180721       1 server.go:872] "Version info" version="v1.30.0"
	I0507 18:02:27.180739       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 18:02:27.232866       1 config.go:192] "Starting service config controller"
	I0507 18:02:27.232889       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0507 18:02:27.232933       1 config.go:101] "Starting endpoint slice config controller"
	I0507 18:02:27.232941       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0507 18:02:27.234089       1 config.go:319] "Starting node config controller"
	I0507 18:02:27.234121       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0507 18:02:27.333734       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0507 18:02:27.333872       1 shared_informer.go:320] Caches are synced for service config
	I0507 18:02:27.434462       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [b4eb33bb7159] <==
	W0507 18:02:02.868843       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0507 18:02:02.868879       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0507 18:02:02.889175       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0507 18:02:02.889270       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0507 18:02:02.977230       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0507 18:02:02.977272       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0507 18:02:02.993155       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0507 18:02:02.993232       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0507 18:02:03.056231       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0507 18:02:03.056482       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0507 18:02:03.099767       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0507 18:02:03.099808       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0507 18:02:03.146598       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0507 18:02:03.146705       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0507 18:02:03.191551       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0507 18:02:03.191594       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0507 18:02:03.286318       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0507 18:02:03.286493       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0507 18:02:03.335103       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0507 18:02:03.335874       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0507 18:02:03.366446       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0507 18:02:03.366486       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0507 18:02:03.428162       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0507 18:02:03.428577       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0507 18:02:06.070669       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 07 18:06:28 addons-809100 kubelet[2106]: I0507 18:06:28.088083    2106 scope.go:117] "RemoveContainer" containerID="9bd07d27b2f13f74e9adec28c577d420072f210243ba2880e6f08c25838d7bc3"
	May 07 18:06:28 addons-809100 kubelet[2106]: E0507 18:06:28.088639    2106 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"gadget\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=gadget pod=gadget-kzdfd_gadget(27a1055a-971e-4151-97f3-b69f110173be)\"" pod="gadget/gadget-kzdfd" podUID="27a1055a-971e-4151-97f3-b69f110173be"
	May 07 18:06:29 addons-809100 kubelet[2106]: I0507 18:06:29.111141    2106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a059f5f-dfcc-4150-a11f-ed5d7c9e895d" path="/var/lib/kubelet/pods/0a059f5f-dfcc-4150-a11f-ed5d7c9e895d/volumes"
	May 07 18:06:32 addons-809100 kubelet[2106]: I0507 18:06:32.618421    2106 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4tqjv\" (UniqueName: \"kubernetes.io/projected/cee41230-80e6-4364-9c36-7b960d3c2abd-kube-api-access-4tqjv\") pod \"cee41230-80e6-4364-9c36-7b960d3c2abd\" (UID: \"cee41230-80e6-4364-9c36-7b960d3c2abd\") "
	May 07 18:06:32 addons-809100 kubelet[2106]: I0507 18:06:32.618664    2106 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cee41230-80e6-4364-9c36-7b960d3c2abd-config-volume\") pod \"cee41230-80e6-4364-9c36-7b960d3c2abd\" (UID: \"cee41230-80e6-4364-9c36-7b960d3c2abd\") "
	May 07 18:06:32 addons-809100 kubelet[2106]: I0507 18:06:32.619739    2106 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cee41230-80e6-4364-9c36-7b960d3c2abd-config-volume" (OuterVolumeSpecName: "config-volume") pod "cee41230-80e6-4364-9c36-7b960d3c2abd" (UID: "cee41230-80e6-4364-9c36-7b960d3c2abd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	May 07 18:06:32 addons-809100 kubelet[2106]: I0507 18:06:32.627628    2106 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cee41230-80e6-4364-9c36-7b960d3c2abd-kube-api-access-4tqjv" (OuterVolumeSpecName: "kube-api-access-4tqjv") pod "cee41230-80e6-4364-9c36-7b960d3c2abd" (UID: "cee41230-80e6-4364-9c36-7b960d3c2abd"). InnerVolumeSpecName "kube-api-access-4tqjv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 07 18:06:32 addons-809100 kubelet[2106]: I0507 18:06:32.719963    2106 reconciler_common.go:289] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cee41230-80e6-4364-9c36-7b960d3c2abd-config-volume\") on node \"addons-809100\" DevicePath \"\""
	May 07 18:06:32 addons-809100 kubelet[2106]: I0507 18:06:32.720077    2106 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-4tqjv\" (UniqueName: \"kubernetes.io/projected/cee41230-80e6-4364-9c36-7b960d3c2abd-kube-api-access-4tqjv\") on node \"addons-809100\" DevicePath \"\""
	May 07 18:06:33 addons-809100 kubelet[2106]: I0507 18:06:33.055742    2106 scope.go:117] "RemoveContainer" containerID="575f440a16353dd790dc07abf6471e9519d9aa31aafd3470585954e28f509750"
	May 07 18:06:33 addons-809100 kubelet[2106]: I0507 18:06:33.109033    2106 scope.go:117] "RemoveContainer" containerID="575f440a16353dd790dc07abf6471e9519d9aa31aafd3470585954e28f509750"
	May 07 18:06:33 addons-809100 kubelet[2106]: E0507 18:06:33.111735    2106 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 575f440a16353dd790dc07abf6471e9519d9aa31aafd3470585954e28f509750" containerID="575f440a16353dd790dc07abf6471e9519d9aa31aafd3470585954e28f509750"
	May 07 18:06:33 addons-809100 kubelet[2106]: I0507 18:06:33.111873    2106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"575f440a16353dd790dc07abf6471e9519d9aa31aafd3470585954e28f509750"} err="failed to get container status \"575f440a16353dd790dc07abf6471e9519d9aa31aafd3470585954e28f509750\": rpc error: code = Unknown desc = Error response from daemon: No such container: 575f440a16353dd790dc07abf6471e9519d9aa31aafd3470585954e28f509750"
	May 07 18:06:35 addons-809100 kubelet[2106]: I0507 18:06:35.108013    2106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cee41230-80e6-4364-9c36-7b960d3c2abd" path="/var/lib/kubelet/pods/cee41230-80e6-4364-9c36-7b960d3c2abd/volumes"
	May 07 18:06:36 addons-809100 kubelet[2106]: I0507 18:06:36.134309    2106 scope.go:117] "RemoveContainer" containerID="4add077f5d7305497d8095407ae32f10994e50a4a30641e288be2e6a5f472041"
	May 07 18:06:36 addons-809100 kubelet[2106]: I0507 18:06:36.154726    2106 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9jg9\" (UniqueName: \"kubernetes.io/projected/78ef06d5-ca8a-4eff-9c6c-77a168170787-kube-api-access-x9jg9\") pod \"78ef06d5-ca8a-4eff-9c6c-77a168170787\" (UID: \"78ef06d5-ca8a-4eff-9c6c-77a168170787\") "
	May 07 18:06:36 addons-809100 kubelet[2106]: I0507 18:06:36.154804    2106 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/78ef06d5-ca8a-4eff-9c6c-77a168170787-device-plugin\") pod \"78ef06d5-ca8a-4eff-9c6c-77a168170787\" (UID: \"78ef06d5-ca8a-4eff-9c6c-77a168170787\") "
	May 07 18:06:36 addons-809100 kubelet[2106]: I0507 18:06:36.154901    2106 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/78ef06d5-ca8a-4eff-9c6c-77a168170787-device-plugin" (OuterVolumeSpecName: "device-plugin") pod "78ef06d5-ca8a-4eff-9c6c-77a168170787" (UID: "78ef06d5-ca8a-4eff-9c6c-77a168170787"). InnerVolumeSpecName "device-plugin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	May 07 18:06:36 addons-809100 kubelet[2106]: I0507 18:06:36.165538    2106 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78ef06d5-ca8a-4eff-9c6c-77a168170787-kube-api-access-x9jg9" (OuterVolumeSpecName: "kube-api-access-x9jg9") pod "78ef06d5-ca8a-4eff-9c6c-77a168170787" (UID: "78ef06d5-ca8a-4eff-9c6c-77a168170787"). InnerVolumeSpecName "kube-api-access-x9jg9". PluginName "kubernetes.io/projected", VolumeGidValue ""
	May 07 18:06:36 addons-809100 kubelet[2106]: I0507 18:06:36.187310    2106 scope.go:117] "RemoveContainer" containerID="4add077f5d7305497d8095407ae32f10994e50a4a30641e288be2e6a5f472041"
	May 07 18:06:36 addons-809100 kubelet[2106]: E0507 18:06:36.188578    2106 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error response from daemon: No such container: 4add077f5d7305497d8095407ae32f10994e50a4a30641e288be2e6a5f472041" containerID="4add077f5d7305497d8095407ae32f10994e50a4a30641e288be2e6a5f472041"
	May 07 18:06:36 addons-809100 kubelet[2106]: I0507 18:06:36.188608    2106 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"docker","ID":"4add077f5d7305497d8095407ae32f10994e50a4a30641e288be2e6a5f472041"} err="failed to get container status \"4add077f5d7305497d8095407ae32f10994e50a4a30641e288be2e6a5f472041\": rpc error: code = Unknown desc = Error response from daemon: No such container: 4add077f5d7305497d8095407ae32f10994e50a4a30641e288be2e6a5f472041"
	May 07 18:06:36 addons-809100 kubelet[2106]: I0507 18:06:36.255547    2106 reconciler_common.go:289] "Volume detached for volume \"device-plugin\" (UniqueName: \"kubernetes.io/host-path/78ef06d5-ca8a-4eff-9c6c-77a168170787-device-plugin\") on node \"addons-809100\" DevicePath \"\""
	May 07 18:06:36 addons-809100 kubelet[2106]: I0507 18:06:36.255785    2106 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-x9jg9\" (UniqueName: \"kubernetes.io/projected/78ef06d5-ca8a-4eff-9c6c-77a168170787-kube-api-access-x9jg9\") on node \"addons-809100\" DevicePath \"\""
	May 07 18:06:37 addons-809100 kubelet[2106]: I0507 18:06:37.107065    2106 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78ef06d5-ca8a-4eff-9c6c-77a168170787" path="/var/lib/kubelet/pods/78ef06d5-ca8a-4eff-9c6c-77a168170787/volumes"
	
	
	==> storage-provisioner [99d74d5bfcd6] <==
	I0507 18:02:48.052127       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0507 18:02:48.097862       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0507 18:02:48.097979       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0507 18:02:48.157230       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0507 18:02:48.157663       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"fad1ba13-de54-4d35-852d-8a65626c31ad", APIVersion:"v1", ResourceVersion:"763", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-809100_5891ed13-23dd-4351-b005-465aa677c2cc became leader
	I0507 18:02:48.158375       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-809100_5891ed13-23dd-4351-b005-465aa677c2cc!
	I0507 18:02:48.259536       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-809100_5891ed13-23dd-4351-b005-465aa677c2cc!
	

-- /stdout --
** stderr ** 
	W0507 18:06:28.922122    1568 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-809100 -n addons-809100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p addons-809100 -n addons-809100: (11.2672653s)
helpers_test.go:261: (dbg) Run:  kubectl --context addons-809100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-g6g5z ingress-nginx-admission-patch-bfvb8
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-809100 describe pod ingress-nginx-admission-create-g6g5z ingress-nginx-admission-patch-bfvb8
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-809100 describe pod ingress-nginx-admission-create-g6g5z ingress-nginx-admission-patch-bfvb8: exit status 1 (148.6675ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-g6g5z" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-bfvb8" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-809100 describe pod ingress-nginx-admission-create-g6g5z ingress-nginx-admission-patch-bfvb8: exit status 1
--- FAIL: TestAddons/parallel/Registry (86.48s)

TestErrorSpam/setup (176.75s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-751800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 --driver=hyperv
E0507 18:10:23.020257    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 18:10:23.034700    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 18:10:23.050059    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 18:10:23.081364    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 18:10:23.127150    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 18:10:23.221635    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 18:10:23.393586    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 18:10:23.719087    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 18:10:24.367634    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 18:10:25.652355    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 18:10:28.218321    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 18:10:33.341050    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 18:10:43.597865    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 18:11:04.080677    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 18:11:45.056132    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-751800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 --driver=hyperv: (2m56.7545865s)
error_spam_test.go:96: unexpected stderr: "W0507 18:09:59.944748    5356 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."
error_spam_test.go:110: minikube stdout:
* [nospam-751800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
- KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
- MINIKUBE_LOCATION=18804
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-751800" primary control-plane node in "nospam-751800" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-751800" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
W0507 18:09:59.944748    5356 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
--- FAIL: TestErrorSpam/setup (176.75s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (30.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:731: link out/minikube-windows-amd64.exe out\kubectl.exe: Cannot create a file when that file already exists.
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-527400 -n functional-527400
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-527400 -n functional-527400: (10.942474s)
helpers_test.go:244: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 logs -n 25: (7.5693187s)
helpers_test.go:252: TestFunctional/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                            Args                             |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| pause   | nospam-751800 --log_dir                                     | nospam-751800     | minikube5\jenkins | v1.33.0 | 07 May 24 18:13 UTC | 07 May 24 18:14 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 |                   |                   |         |                     |                     |
	|         | pause                                                       |                   |                   |         |                     |                     |
	| unpause | nospam-751800 --log_dir                                     | nospam-751800     | minikube5\jenkins | v1.33.0 | 07 May 24 18:14 UTC | 07 May 24 18:14 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-751800 --log_dir                                     | nospam-751800     | minikube5\jenkins | v1.33.0 | 07 May 24 18:14 UTC | 07 May 24 18:14 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| unpause | nospam-751800 --log_dir                                     | nospam-751800     | minikube5\jenkins | v1.33.0 | 07 May 24 18:14 UTC | 07 May 24 18:14 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 |                   |                   |         |                     |                     |
	|         | unpause                                                     |                   |                   |         |                     |                     |
	| stop    | nospam-751800 --log_dir                                     | nospam-751800     | minikube5\jenkins | v1.33.0 | 07 May 24 18:14 UTC | 07 May 24 18:14 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-751800 --log_dir                                     | nospam-751800     | minikube5\jenkins | v1.33.0 | 07 May 24 18:14 UTC | 07 May 24 18:15 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| stop    | nospam-751800 --log_dir                                     | nospam-751800     | minikube5\jenkins | v1.33.0 | 07 May 24 18:15 UTC | 07 May 24 18:15 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 |                   |                   |         |                     |                     |
	|         | stop                                                        |                   |                   |         |                     |                     |
	| delete  | -p nospam-751800                                            | nospam-751800     | minikube5\jenkins | v1.33.0 | 07 May 24 18:15 UTC | 07 May 24 18:15 UTC |
	| start   | -p functional-527400                                        | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:15 UTC | 07 May 24 18:18 UTC |
	|         | --memory=4000                                               |                   |                   |         |                     |                     |
	|         | --apiserver-port=8441                                       |                   |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv                                  |                   |                   |         |                     |                     |
	| start   | -p functional-527400                                        | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:18 UTC | 07 May 24 18:20 UTC |
	|         | --alsologtostderr -v=8                                      |                   |                   |         |                     |                     |
	| cache   | functional-527400 cache add                                 | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:20 UTC | 07 May 24 18:20 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | functional-527400 cache add                                 | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:20 UTC | 07 May 24 18:20 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | functional-527400 cache add                                 | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:20 UTC | 07 May 24 18:21 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-527400 cache add                                 | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:21 UTC | 07 May 24 18:21 UTC |
	|         | minikube-local-cache-test:functional-527400                 |                   |                   |         |                     |                     |
	| cache   | functional-527400 cache delete                              | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:21 UTC | 07 May 24 18:21 UTC |
	|         | minikube-local-cache-test:functional-527400                 |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.33.0 | 07 May 24 18:21 UTC | 07 May 24 18:21 UTC |
	|         | registry.k8s.io/pause:3.3                                   |                   |                   |         |                     |                     |
	| cache   | list                                                        | minikube          | minikube5\jenkins | v1.33.0 | 07 May 24 18:21 UTC | 07 May 24 18:21 UTC |
	| ssh     | functional-527400 ssh sudo                                  | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:21 UTC | 07 May 24 18:21 UTC |
	|         | crictl images                                               |                   |                   |         |                     |                     |
	| ssh     | functional-527400                                           | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:21 UTC | 07 May 24 18:21 UTC |
	|         | ssh sudo docker rmi                                         |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| ssh     | functional-527400 ssh                                       | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:21 UTC |                     |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | functional-527400 cache reload                              | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:21 UTC | 07 May 24 18:21 UTC |
	| ssh     | functional-527400 ssh                                       | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:21 UTC | 07 May 24 18:21 UTC |
	|         | sudo crictl inspecti                                        |                   |                   |         |                     |                     |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.33.0 | 07 May 24 18:21 UTC | 07 May 24 18:21 UTC |
	|         | registry.k8s.io/pause:3.1                                   |                   |                   |         |                     |                     |
	| cache   | delete                                                      | minikube          | minikube5\jenkins | v1.33.0 | 07 May 24 18:21 UTC | 07 May 24 18:21 UTC |
	|         | registry.k8s.io/pause:latest                                |                   |                   |         |                     |                     |
	| kubectl | functional-527400 kubectl --                                | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:21 UTC | 07 May 24 18:21 UTC |
	|         | --context functional-527400                                 |                   |                   |         |                     |                     |
	|         | get pods                                                    |                   |                   |         |                     |                     |
	|---------|-------------------------------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/07 18:18:41
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0507 18:18:41.908360   11760 out.go:291] Setting OutFile to fd 972 ...
	I0507 18:18:41.909331   11760 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 18:18:41.909331   11760 out.go:304] Setting ErrFile to fd 820...
	I0507 18:18:41.909433   11760 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 18:18:41.931855   11760 out.go:298] Setting JSON to false
	I0507 18:18:41.935857   11760 start.go:129] hostinfo: {"hostname":"minikube5","uptime":21840,"bootTime":1715084081,"procs":197,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0507 18:18:41.935857   11760 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 18:18:41.940442   11760 out.go:177] * [functional-527400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0507 18:18:41.941880   11760 notify.go:220] Checking for updates...
	I0507 18:18:41.945622   11760 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:18:41.948109   11760 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 18:18:41.950035   11760 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0507 18:18:41.952381   11760 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 18:18:41.955125   11760 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 18:18:41.958292   11760 config.go:182] Loaded profile config "functional-527400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:18:41.958957   11760 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 18:18:46.673791   11760 out.go:177] * Using the hyperv driver based on existing profile
	I0507 18:18:46.676003   11760 start.go:297] selected driver: hyperv
	I0507 18:18:46.676065   11760 start.go:901] validating driver "hyperv" against &{Name:functional-527400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.30.0 ClusterName:functional-527400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.129.80 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PV
ersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 18:18:46.676065   11760 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 18:18:46.718085   11760 cni.go:84] Creating CNI manager for ""
	I0507 18:18:46.718085   11760 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 18:18:46.718998   11760 start.go:340] cluster config:
	{Name:functional-527400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-527400 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.129.80 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mount
Port:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 18:18:46.718998   11760 iso.go:125] acquiring lock: {Name:mk4977609d05da04fcecf95837b3381fb1950afd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 18:18:46.723857   11760 out.go:177] * Starting "functional-527400" primary control-plane node in "functional-527400" cluster
	I0507 18:18:46.726812   11760 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 18:18:46.726812   11760 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0507 18:18:46.726812   11760 cache.go:56] Caching tarball of preloaded images
	I0507 18:18:46.726812   11760 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0507 18:18:46.727428   11760 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 18:18:46.727428   11760 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\config.json ...
	I0507 18:18:46.730017   11760 start.go:360] acquireMachinesLock for functional-527400: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 18:18:46.730205   11760 start.go:364] duration metric: took 115µs to acquireMachinesLock for "functional-527400"
	I0507 18:18:46.730336   11760 start.go:96] Skipping create...Using existing machine configuration
	I0507 18:18:46.730425   11760 fix.go:54] fixHost starting: 
	I0507 18:18:46.731064   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:18:49.195521   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:18:49.195521   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:18:49.195805   11760 fix.go:112] recreateIfNeeded on functional-527400: state=Running err=<nil>
	W0507 18:18:49.195805   11760 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 18:18:49.199533   11760 out.go:177] * Updating the running hyperv "functional-527400" VM ...
	I0507 18:18:49.203592   11760 machine.go:94] provisionDockerMachine start ...
	I0507 18:18:49.203592   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:18:51.142023   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:18:51.142023   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:18:51.142152   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:18:53.405882   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:18:53.406895   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:18:53.410679   11760 main.go:141] libmachine: Using SSH client type: native
	I0507 18:18:53.411272   11760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.129.80 22 <nil> <nil>}
	I0507 18:18:53.411272   11760 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 18:18:53.533082   11760 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-527400
	
	I0507 18:18:53.533082   11760 buildroot.go:166] provisioning hostname "functional-527400"
	I0507 18:18:53.533620   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:18:55.428004   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:18:55.428004   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:18:55.428004   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:18:57.683095   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:18:57.683095   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:18:57.686565   11760 main.go:141] libmachine: Using SSH client type: native
	I0507 18:18:57.687149   11760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.129.80 22 <nil> <nil>}
	I0507 18:18:57.687149   11760 main.go:141] libmachine: About to run SSH command:
	sudo hostname functional-527400 && echo "functional-527400" | sudo tee /etc/hostname
	I0507 18:18:57.826281   11760 main.go:141] libmachine: SSH cmd err, output: <nil>: functional-527400
	
	I0507 18:18:57.826863   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:18:59.705595   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:18:59.705595   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:18:59.706260   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:19:02.010974   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:19:02.010974   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:02.014829   11760 main.go:141] libmachine: Using SSH client type: native
	I0507 18:19:02.015078   11760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.129.80 22 <nil> <nil>}
	I0507 18:19:02.015078   11760 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-527400' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-527400/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-527400' | sudo tee -a /etc/hosts; 
				fi
			fi
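
The `/etc/hosts`-patching script minikube just ran over SSH (the block ending at the `fi` above) is idempotent: it only touches the file when the hostname is missing. A minimal local sketch of the same logic, run against a temporary file instead of the guest's `/etc/hosts` (the `HOSTS_FILE` and seed contents are illustrative, not from the log):

```shell
# Sketch of the hostname-patching logic above, applied to a temp copy
# so it can be exercised without sudo or a guest VM.
HOSTS_FILE=$(mktemp)
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS_FILE"
NAME=functional-527400

# Only act if no line already ends with the hostname.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS_FILE"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS_FILE"; then
    # A 127.0.1.1 entry exists: rewrite it in place.
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS_FILE"
  else
    # No 127.0.1.1 entry: append one.
    echo "127.0.1.1 $NAME" >> "$HOSTS_FILE"
  fi
fi
cat "$HOSTS_FILE"
```

Running the sketch a second time is a no-op, which matches the empty SSH output (`SSH cmd err, output: <nil>:`) logged when the guest entry was already correct.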
	I0507 18:19:02.145551   11760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 18:19:02.145670   11760 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0507 18:19:02.145670   11760 buildroot.go:174] setting up certificates
	I0507 18:19:02.145670   11760 provision.go:84] configureAuth start
	I0507 18:19:02.145788   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:19:04.091776   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:19:04.091776   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:04.091776   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:19:06.350493   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:19:06.350493   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:06.350493   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:19:08.246570   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:19:08.246570   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:08.247726   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:19:10.502316   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:19:10.502316   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:10.502316   11760 provision.go:143] copyHostCerts
	I0507 18:19:10.502316   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0507 18:19:10.502316   11760 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0507 18:19:10.502316   11760 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0507 18:19:10.503010   11760 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0507 18:19:10.503895   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0507 18:19:10.503959   11760 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0507 18:19:10.504108   11760 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0507 18:19:10.504347   11760 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0507 18:19:10.505133   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0507 18:19:10.505299   11760 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0507 18:19:10.505371   11760 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0507 18:19:10.505615   11760 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0507 18:19:10.506363   11760 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-527400 san=[127.0.0.1 172.19.129.80 functional-527400 localhost minikube]
	I0507 18:19:10.661589   11760 provision.go:177] copyRemoteCerts
	I0507 18:19:10.669740   11760 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 18:19:10.669740   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:19:12.557587   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:19:12.557587   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:12.557587   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:19:14.803158   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:19:14.803158   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:14.803787   11760 sshutil.go:53] new ssh client: &{IP:172.19.129.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-527400\id_rsa Username:docker}
	I0507 18:19:14.900606   11760 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2305747s)
	I0507 18:19:14.900606   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0507 18:19:14.900606   11760 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0507 18:19:14.943563   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0507 18:19:14.944218   11760 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0507 18:19:14.986522   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0507 18:19:14.986751   11760 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I0507 18:19:15.027709   11760 provision.go:87] duration metric: took 12.8811518s to configureAuth
	I0507 18:19:15.027709   11760 buildroot.go:189] setting minikube options for container-runtime
	I0507 18:19:15.028710   11760 config.go:182] Loaded profile config "functional-527400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:19:15.028847   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:19:16.901521   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:19:16.902288   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:16.902516   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:19:19.163761   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:19:19.163761   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:19.170518   11760 main.go:141] libmachine: Using SSH client type: native
	I0507 18:19:19.170518   11760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.129.80 22 <nil> <nil>}
	I0507 18:19:19.170518   11760 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 18:19:19.288096   11760 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 18:19:19.288096   11760 buildroot.go:70] root file system type: tmpfs
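
The root-filesystem probe at 18:19:19 above (which returned `tmpfs` on this Buildroot guest) is a single pipeline; a standalone sketch of the same probe, assuming GNU coreutils `df` as on the guest:

```shell
# Same probe minikube runs over SSH to learn the root fs type:
# df emits a header row plus one data row, so tail -n 1 keeps the value.
FSTYPE=$(df --output=fstype / | tail -n 1)
echo "$FSTYPE"
```

On the host running this sketch the value will reflect the local root filesystem (e.g. `ext4`), not necessarily the `tmpfs` seen in the log.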
	I0507 18:19:19.288331   11760 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 18:19:19.288331   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:19:21.168608   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:19:21.168608   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:21.168880   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:19:23.428664   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:19:23.428990   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:23.432555   11760 main.go:141] libmachine: Using SSH client type: native
	I0507 18:19:23.432942   11760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.129.80 22 <nil> <nil>}
	I0507 18:19:23.433016   11760 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 18:19:23.576603   11760 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 18:19:23.576741   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:19:25.462471   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:19:25.462471   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:25.462777   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:19:27.722856   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:19:27.722856   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:27.727096   11760 main.go:141] libmachine: Using SSH client type: native
	I0507 18:19:27.727717   11760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.129.80 22 <nil> <nil>}
	I0507 18:19:27.727717   11760 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 18:19:27.868525   11760 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 18:19:27.868525   11760 machine.go:97] duration metric: took 38.6622691s to provisionDockerMachine
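
The `docker.service` swap at 18:19:27 above uses a "diff, then replace and restart only on change" pattern: `diff -u old new || { mv new old; restart; }`. A hedged sketch of that pattern on plain temp files (no systemd involved; file names and contents are illustrative):

```shell
# Sketch of minikube's conditional-restart step: the service is only
# bounced when the newly rendered unit differs from the installed one.
OLD=$(mktemp); NEW=$(mktemp)
echo "ExecStart=/usr/bin/dockerd" > "$OLD"
echo "ExecStart=/usr/bin/dockerd --tlsverify" > "$NEW"

RESTARTED=no
# diff exits 0 when files match; the || branch runs only on a difference.
diff -u "$OLD" "$NEW" >/dev/null || { mv "$NEW" "$OLD"; RESTARTED=yes; }
echo "restarted=$RESTARTED"
```

In this run the units differed (the rendered file carries the TLS and `--insecure-registry` flags), so the `||` branch fired, which is why the log shows the SSH command completing with empty output and `provisionDockerMachine` taking the full 38s.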
	I0507 18:19:27.868525   11760 start.go:293] postStartSetup for "functional-527400" (driver="hyperv")
	I0507 18:19:27.868525   11760 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 18:19:27.878455   11760 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 18:19:27.878455   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:19:29.755750   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:19:29.755750   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:29.755750   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:19:31.995070   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:19:31.995070   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:31.995569   11760 sshutil.go:53] new ssh client: &{IP:172.19.129.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-527400\id_rsa Username:docker}
	I0507 18:19:32.092930   11760 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2141852s)
	I0507 18:19:32.101668   11760 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 18:19:32.108158   11760 command_runner.go:130] > NAME=Buildroot
	I0507 18:19:32.108230   11760 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0507 18:19:32.108230   11760 command_runner.go:130] > ID=buildroot
	I0507 18:19:32.108230   11760 command_runner.go:130] > VERSION_ID=2023.02.9
	I0507 18:19:32.108230   11760 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0507 18:19:32.108230   11760 info.go:137] Remote host: Buildroot 2023.02.9
	I0507 18:19:32.108324   11760 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0507 18:19:32.108591   11760 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0507 18:19:32.109103   11760 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> 99922.pem in /etc/ssl/certs
	I0507 18:19:32.109213   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /etc/ssl/certs/99922.pem
	I0507 18:19:32.109892   11760 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\9992\hosts -> hosts in /etc/test/nested/copy/9992
	I0507 18:19:32.110002   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\9992\hosts -> /etc/test/nested/copy/9992/hosts
	I0507 18:19:32.117675   11760 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/9992
	I0507 18:19:32.133772   11760 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /etc/ssl/certs/99922.pem (1708 bytes)
	I0507 18:19:32.186790   11760 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\9992\hosts --> /etc/test/nested/copy/9992/hosts (40 bytes)
	I0507 18:19:32.235856   11760 start.go:296] duration metric: took 4.3670303s for postStartSetup
	I0507 18:19:32.235856   11760 fix.go:56] duration metric: took 45.5023382s for fixHost
	I0507 18:19:32.235856   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:19:34.144495   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:19:34.144495   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:34.144495   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:19:36.438670   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:19:36.438670   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:36.442616   11760 main.go:141] libmachine: Using SSH client type: native
	I0507 18:19:36.442616   11760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.129.80 22 <nil> <nil>}
	I0507 18:19:36.443155   11760 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0507 18:19:36.561850   11760 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715105976.793681444
	
	I0507 18:19:36.561850   11760 fix.go:216] guest clock: 1715105976.793681444
	I0507 18:19:36.561850   11760 fix.go:229] Guest: 2024-05-07 18:19:36.793681444 +0000 UTC Remote: 2024-05-07 18:19:32.2358564 +0000 UTC m=+50.445070601 (delta=4.557825044s)
	I0507 18:19:36.561850   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:19:38.446580   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:19:38.447384   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:38.447384   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:19:40.691000   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:19:40.691000   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:40.695476   11760 main.go:141] libmachine: Using SSH client type: native
	I0507 18:19:40.696071   11760 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.129.80 22 <nil> <nil>}
	I0507 18:19:40.696071   11760 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715105976
	I0507 18:19:40.829666   11760 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May  7 18:19:36 UTC 2024
	
	I0507 18:19:40.829760   11760 fix.go:236] clock set: Tue May  7 18:19:36 UTC 2024
	 (err=<nil>)
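The clock-fix sequence above reads the guest clock over SSH with `date +%s.%N`, compares it to the host's wall clock, and resets the guest with `sudo date -s @<epoch>` when they drift. A minimal local sketch of the delta computation, using hypothetical timestamps mirroring the values in this run's log:

```shell
#!/bin/sh
# Hypothetical host/guest timestamps (seconds.nanoseconds) mirroring
# the fix.go log lines above; the values are illustrative.
host_ts=1715105972.235856400
guest_ts=1715105976.793681444

# guest - host drift, computed with awk for floating-point arithmetic
delta=$(awk -v g="$guest_ts" -v h="$host_ts" 'BEGIN { printf "%.6f", g - h }')
echo "delta=${delta}s"

# minikube resets the guest clock (sudo date -s @<epoch>) when |delta| > 1s
if awk -v d="$delta" 'BEGIN { exit !(d > 1 || d < -1) }'; then
  echo "clock set needed"
fi
```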
	I0507 18:19:40.829760   11760 start.go:83] releasing machines lock for "functional-527400", held for 54.0958279s
	I0507 18:19:40.830001   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:19:42.735938   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:19:42.735938   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:42.736615   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:19:44.985022   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:19:44.985022   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:44.988341   11760 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 18:19:44.988437   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:19:44.995772   11760 ssh_runner.go:195] Run: cat /version.json
	I0507 18:19:44.995772   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:19:46.961432   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:19:46.961738   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:46.961738   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:19:46.961910   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:19:46.961910   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:46.961910   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:19:49.360648   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:19:49.360797   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:49.360938   11760 sshutil.go:53] new ssh client: &{IP:172.19.129.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-527400\id_rsa Username:docker}
	I0507 18:19:49.381606   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:19:49.381606   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:19:49.381606   11760 sshutil.go:53] new ssh client: &{IP:172.19.129.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-527400\id_rsa Username:docker}
	I0507 18:19:49.513644   11760 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0507 18:19:49.513644   11760 command_runner.go:130] > {"iso_version": "v1.33.0-1714498396-18779", "kicbase_version": "v0.0.43-1714386659-18769", "minikube_version": "v1.33.0", "commit": "0c7995ab2d4914d5c74027eee5f5d102e19316f2"}
	I0507 18:19:49.513644   11760 ssh_runner.go:235] Completed: cat /version.json: (4.5175609s)
	I0507 18:19:49.513644   11760 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5248951s)
	I0507 18:19:49.522598   11760 ssh_runner.go:195] Run: systemctl --version
	I0507 18:19:49.529779   11760 command_runner.go:130] > systemd 252 (252)
	I0507 18:19:49.529779   11760 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0507 18:19:49.539531   11760 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0507 18:19:49.546946   11760 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0507 18:19:49.547347   11760 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 18:19:49.554846   11760 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0507 18:19:49.571222   11760 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0507 18:19:49.571222   11760 start.go:494] detecting cgroup driver to use...
	I0507 18:19:49.571440   11760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 18:19:49.603069   11760 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0507 18:19:49.612727   11760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0507 18:19:49.640193   11760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 18:19:49.657775   11760 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 18:19:49.666288   11760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 18:19:49.693763   11760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 18:19:49.720597   11760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 18:19:49.745679   11760 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 18:19:49.771168   11760 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 18:19:49.802439   11760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 18:19:49.832794   11760 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 18:19:49.863956   11760 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
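The run of `sed` invocations above rewrites /etc/containerd/config.toml in place: forcing `SystemdCgroup = false` (the cgroupfs driver), migrating runc v1 runtime names to v2, and pointing `conf_dir` at /etc/cni/net.d. A sketch of the same rewrite against a scratch copy of the file (the sample content is illustrative, not the real containerd config; GNU sed flags, as on the Buildroot guest):

```shell
#!/bin/sh
# Scratch config.toml carrying the three fields the log rewrites
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
  SystemdCgroup = true
  conf_dir = "/etc/cni/net.mk"
  runtime_type = "io.containerd.runc.v1"
EOF

# The same substitutions minikube runs over SSH (minus the sudo wrapper)
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' "$cfg"
sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' "$cfg"

result=$(cat "$cfg")
echo "$result"
rm -f "$cfg"
```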
	I0507 18:19:49.890956   11760 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 18:19:49.906446   11760 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0507 18:19:49.915356   11760 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 18:19:49.942048   11760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:19:50.169097   11760 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0507 18:19:50.205174   11760 start.go:494] detecting cgroup driver to use...
	I0507 18:19:50.218779   11760 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 18:19:50.250156   11760 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0507 18:19:50.250156   11760 command_runner.go:130] > [Unit]
	I0507 18:19:50.250156   11760 command_runner.go:130] > Description=Docker Application Container Engine
	I0507 18:19:50.250156   11760 command_runner.go:130] > Documentation=https://docs.docker.com
	I0507 18:19:50.250279   11760 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0507 18:19:50.250279   11760 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0507 18:19:50.250279   11760 command_runner.go:130] > StartLimitBurst=3
	I0507 18:19:50.250279   11760 command_runner.go:130] > StartLimitIntervalSec=60
	I0507 18:19:50.250350   11760 command_runner.go:130] > [Service]
	I0507 18:19:50.250350   11760 command_runner.go:130] > Type=notify
	I0507 18:19:50.250350   11760 command_runner.go:130] > Restart=on-failure
	I0507 18:19:50.250391   11760 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0507 18:19:50.250391   11760 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0507 18:19:50.250391   11760 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0507 18:19:50.250391   11760 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0507 18:19:50.250391   11760 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0507 18:19:50.250391   11760 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0507 18:19:50.250391   11760 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0507 18:19:50.250391   11760 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0507 18:19:50.250391   11760 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0507 18:19:50.250391   11760 command_runner.go:130] > ExecStart=
	I0507 18:19:50.250391   11760 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0507 18:19:50.250391   11760 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0507 18:19:50.250391   11760 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0507 18:19:50.250391   11760 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0507 18:19:50.250391   11760 command_runner.go:130] > LimitNOFILE=infinity
	I0507 18:19:50.250391   11760 command_runner.go:130] > LimitNPROC=infinity
	I0507 18:19:50.250391   11760 command_runner.go:130] > LimitCORE=infinity
	I0507 18:19:50.250391   11760 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0507 18:19:50.250391   11760 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0507 18:19:50.250391   11760 command_runner.go:130] > TasksMax=infinity
	I0507 18:19:50.250391   11760 command_runner.go:130] > TimeoutStartSec=0
	I0507 18:19:50.250391   11760 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0507 18:19:50.250391   11760 command_runner.go:130] > Delegate=yes
	I0507 18:19:50.250391   11760 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0507 18:19:50.250391   11760 command_runner.go:130] > KillMode=process
	I0507 18:19:50.250391   11760 command_runner.go:130] > [Install]
	I0507 18:19:50.250391   11760 command_runner.go:130] > WantedBy=multi-user.target
	I0507 18:19:50.258947   11760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 18:19:50.299279   11760 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 18:19:50.344567   11760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 18:19:50.382069   11760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 18:19:50.404605   11760 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 18:19:50.438822   11760 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0507 18:19:50.447919   11760 ssh_runner.go:195] Run: which cri-dockerd
	I0507 18:19:50.457529   11760 command_runner.go:130] > /usr/bin/cri-dockerd
	I0507 18:19:50.466504   11760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 18:19:50.482936   11760 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 18:19:50.521897   11760 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 18:19:50.754965   11760 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 18:19:50.985523   11760 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 18:19:50.985875   11760 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0507 18:19:51.031618   11760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:19:51.255161   11760 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 18:20:04.121517   11760 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.8654704s)
	I0507 18:20:04.134809   11760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 18:20:04.168818   11760 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0507 18:20:04.215797   11760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 18:20:04.249414   11760 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 18:20:04.460940   11760 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 18:20:04.664657   11760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:20:04.871097   11760 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 18:20:04.911344   11760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 18:20:04.947160   11760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:20:05.149272   11760 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 18:20:05.271993   11760 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 18:20:05.281047   11760 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 18:20:05.289711   11760 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0507 18:20:05.289711   11760 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0507 18:20:05.289711   11760 command_runner.go:130] > Device: 0,22	Inode: 1427        Links: 1
	I0507 18:20:05.289832   11760 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0507 18:20:05.289832   11760 command_runner.go:130] > Access: 2024-05-07 18:20:05.434840212 +0000
	I0507 18:20:05.289832   11760 command_runner.go:130] > Modify: 2024-05-07 18:20:05.407835935 +0000
	I0507 18:20:05.289866   11760 command_runner.go:130] > Change: 2024-05-07 18:20:05.412836727 +0000
	I0507 18:20:05.289866   11760 command_runner.go:130] >  Birth: -
	I0507 18:20:05.289866   11760 start.go:562] Will wait 60s for crictl version
	I0507 18:20:05.299231   11760 ssh_runner.go:195] Run: which crictl
	I0507 18:20:05.305086   11760 command_runner.go:130] > /usr/bin/crictl
	I0507 18:20:05.315265   11760 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 18:20:05.360540   11760 command_runner.go:130] > Version:  0.1.0
	I0507 18:20:05.360651   11760 command_runner.go:130] > RuntimeName:  docker
	I0507 18:20:05.360651   11760 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0507 18:20:05.360651   11760 command_runner.go:130] > RuntimeApiVersion:  v1
	I0507 18:20:05.360765   11760 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0507 18:20:05.371145   11760 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 18:20:05.400570   11760 command_runner.go:130] > 26.0.2
	I0507 18:20:05.410871   11760 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 18:20:05.438570   11760 command_runner.go:130] > 26.0.2
	I0507 18:20:05.444498   11760 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0507 18:20:05.444680   11760 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0507 18:20:05.449411   11760 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0507 18:20:05.449411   11760 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0507 18:20:05.449952   11760 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0507 18:20:05.449952   11760 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a3:a5:4f Flags:up|broadcast|multicast|running}
	I0507 18:20:05.451825   11760 ip.go:210] interface addr: fe80::1edb:f5fd:c218:d8d2/64
	I0507 18:20:05.451825   11760 ip.go:210] interface addr: 172.19.128.1/20
	I0507 18:20:05.461412   11760 ssh_runner.go:195] Run: grep 172.19.128.1	host.minikube.internal$ /etc/hosts
	I0507 18:20:05.468027   11760 command_runner.go:130] > 172.19.128.1	host.minikube.internal
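The `grep 172.19.128.1 host.minikube.internal$ /etc/hosts` step above checks whether the guest already has a `host.minikube.internal` entry and only appends one when it is missing. A local sketch of that check-then-append pattern against a scratch hosts file (the IP is the one from this run; `add_entry` is a hypothetical helper name):

```shell
#!/bin/sh
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n' > "$hosts"

ip=172.19.128.1
name=host.minikube.internal

# Append the entry only when no matching line is already present
add_entry() {
  grep -q "${ip}[[:space:]]${name}\$" "$hosts" \
    || printf '%s\t%s\n' "$ip" "$name" >> "$hosts"
}

add_entry   # adds the line
add_entry   # second call is a no-op, so the file stays idempotent
count=$(grep -c "$name" "$hosts")
echo "entries: $count"
rm -f "$hosts"
```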
	I0507 18:20:05.468027   11760 kubeadm.go:877] updating cluster {Name:functional-527400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:
v1.30.0 ClusterName:functional-527400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.129.80 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L M
ountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0507 18:20:05.468812   11760 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 18:20:05.477128   11760 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 18:20:05.499476   11760 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0507 18:20:05.499509   11760 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0507 18:20:05.499552   11760 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0507 18:20:05.499552   11760 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0507 18:20:05.499584   11760 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0507 18:20:05.499584   11760 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0507 18:20:05.499584   11760 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0507 18:20:05.499584   11760 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 18:20:05.499640   11760 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0507 18:20:05.499690   11760 docker.go:615] Images already preloaded, skipping extraction
	I0507 18:20:05.506125   11760 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 18:20:05.527223   11760 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0507 18:20:05.527569   11760 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0507 18:20:05.527569   11760 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0507 18:20:05.527569   11760 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0507 18:20:05.527569   11760 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0507 18:20:05.527569   11760 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0507 18:20:05.527569   11760 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0507 18:20:05.527569   11760 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 18:20:05.527834   11760 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0507 18:20:05.528870   11760 cache_images.go:84] Images are preloaded, skipping loading
	I0507 18:20:05.528976   11760 kubeadm.go:928] updating node { 172.19.129.80 8441 v1.30.0 docker true true} ...
	I0507 18:20:05.529230   11760 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-527400 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.129.80
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:functional-527400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0507 18:20:05.536620   11760 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0507 18:20:05.568366   11760 command_runner.go:130] > cgroupfs
	I0507 18:20:05.569910   11760 cni.go:84] Creating CNI manager for ""
	I0507 18:20:05.569959   11760 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 18:20:05.569992   11760 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0507 18:20:05.570147   11760 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.129.80 APIServerPort:8441 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-527400 NodeName:functional-527400 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.129.80"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.129.80 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0507 18:20:05.570340   11760 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.129.80
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-527400"
	  kubeletExtraArgs:
	    node-ip: 172.19.129.80
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.129.80"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
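The generated kubeadm config above is a single YAML stream of four documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A minimal stdlib-only sketch of splitting such a stream and reading each document's `kind` (the `document_kinds` helper is illustrative, not a minikube function):

```python
# Split a kubeadm-style multi-document YAML stream and report each document's kind.
# Stdlib-only sketch; a real consumer would use a proper YAML parser.

def document_kinds(stream: str) -> list[str]:
    kinds = []
    for doc in stream.split("\n---\n"):
        for line in doc.splitlines():
            if line.startswith("kind:"):
                kinds.append(line.split(":", 1)[1].strip())
                break
    return kinds

sample = """apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
"""

print(document_kinds(sample))
# → ['InitConfiguration', 'ClusterConfiguration', 'KubeletConfiguration', 'KubeProxyConfiguration']
```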
	
	I0507 18:20:05.578749   11760 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0507 18:20:05.596975   11760 command_runner.go:130] > kubeadm
	I0507 18:20:05.596975   11760 command_runner.go:130] > kubectl
	I0507 18:20:05.596975   11760 command_runner.go:130] > kubelet
	I0507 18:20:05.598147   11760 binaries.go:44] Found k8s binaries, skipping transfer
	I0507 18:20:05.609186   11760 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0507 18:20:05.626494   11760 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (317 bytes)
	I0507 18:20:05.655632   11760 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 18:20:05.686807   11760 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0507 18:20:05.728730   11760 ssh_runner.go:195] Run: grep 172.19.129.80	control-plane.minikube.internal$ /etc/hosts
	I0507 18:20:05.734859   11760 command_runner.go:130] > 172.19.129.80	control-plane.minikube.internal
	I0507 18:20:05.745860   11760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:20:05.982153   11760 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 18:20:06.040226   11760 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400 for IP: 172.19.129.80
	I0507 18:20:06.040226   11760 certs.go:194] generating shared ca certs ...
	I0507 18:20:06.040226   11760 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:20:06.041233   11760 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0507 18:20:06.041233   11760 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0507 18:20:06.041233   11760 certs.go:256] generating profile certs ...
	I0507 18:20:06.042664   11760 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.key
	I0507 18:20:06.042874   11760 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\apiserver.key.d767d43c
	I0507 18:20:06.042874   11760 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\proxy-client.key
	I0507 18:20:06.042874   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0507 18:20:06.043416   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0507 18:20:06.043587   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0507 18:20:06.043746   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0507 18:20:06.043774   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0507 18:20:06.043774   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0507 18:20:06.043774   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0507 18:20:06.043774   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0507 18:20:06.044588   11760 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem (1338 bytes)
	W0507 18:20:06.045011   11760 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992_empty.pem, impossibly tiny 0 bytes
	I0507 18:20:06.045089   11760 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0507 18:20:06.045359   11760 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0507 18:20:06.045599   11760 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0507 18:20:06.045830   11760 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0507 18:20:06.045968   11760 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem (1708 bytes)
	I0507 18:20:06.045968   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem -> /usr/share/ca-certificates/9992.pem
	I0507 18:20:06.045968   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /usr/share/ca-certificates/99922.pem
	I0507 18:20:06.046501   11760 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:20:06.047768   11760 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 18:20:06.097387   11760 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 18:20:06.150808   11760 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 18:20:06.197267   11760 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0507 18:20:06.250142   11760 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0507 18:20:06.311980   11760 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0507 18:20:06.393628   11760 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0507 18:20:06.451846   11760 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0507 18:20:06.505101   11760 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem --> /usr/share/ca-certificates/9992.pem (1338 bytes)
	I0507 18:20:06.563365   11760 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /usr/share/ca-certificates/99922.pem (1708 bytes)
	I0507 18:20:06.661140   11760 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 18:20:06.720080   11760 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0507 18:20:06.759921   11760 ssh_runner.go:195] Run: openssl version
	I0507 18:20:06.772923   11760 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0507 18:20:06.784908   11760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 18:20:06.823288   11760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:20:06.834438   11760 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:20:06.834438   11760 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:20:06.845428   11760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:20:06.855929   11760 command_runner.go:130] > b5213941
	I0507 18:20:06.867812   11760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0507 18:20:06.899566   11760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9992.pem && ln -fs /usr/share/ca-certificates/9992.pem /etc/ssl/certs/9992.pem"
	I0507 18:20:06.933222   11760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9992.pem
	I0507 18:20:06.940325   11760 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 18:20:06.940386   11760 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 18:20:06.952257   11760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9992.pem
	I0507 18:20:06.962242   11760 command_runner.go:130] > 51391683
	I0507 18:20:06.972818   11760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9992.pem /etc/ssl/certs/51391683.0"
	I0507 18:20:07.041859   11760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99922.pem && ln -fs /usr/share/ca-certificates/99922.pem /etc/ssl/certs/99922.pem"
	I0507 18:20:07.090592   11760 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99922.pem
	I0507 18:20:07.103133   11760 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 18:20:07.103188   11760 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 18:20:07.113456   11760 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99922.pem
	I0507 18:20:07.124509   11760 command_runner.go:130] > 3ec20f2e
	I0507 18:20:07.133196   11760 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99922.pem /etc/ssl/certs/3ec20f2e.0"
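For each CA certificate above, the runner computes `openssl x509 -hash -noout -in <cert>` and then creates a `<subject-hash>.0` symlink in `/etc/ssl/certs` via `ln -fs`, which is how OpenSSL locates trust anchors by hashed filename. A sketch of just the symlink step in a throwaway directory (the hash value is copied from the log; `link_by_hash` is an illustrative helper, not minikube code):

```python
# Sketch of the "<subject-hash>.0" symlink convention used above.
# minikube obtains the real hash from `openssl x509 -hash -noout -in <cert>`.
import os
import tempfile

def link_by_hash(cert_path: str, subject_hash: str, certs_dir: str) -> str:
    """Create <certs_dir>/<subject_hash>.0 pointing at cert_path, like `ln -fs`."""
    link = os.path.join(certs_dir, subject_hash + ".0")
    if os.path.islink(link):
        os.remove(link)  # -f semantics: replace an existing link
    os.symlink(cert_path, link)
    return link

with tempfile.TemporaryDirectory() as d:
    cert = os.path.join(d, "minikubeCA.pem")
    open(cert, "w").close()                   # stand-in for the real CA cert
    link = link_by_hash(cert, "b5213941", d)  # hash value taken from the log
    print(os.path.islink(link), os.readlink(link) == cert)
    # → True True
```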
	I0507 18:20:07.162221   11760 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 18:20:07.171758   11760 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 18:20:07.172746   11760 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0507 18:20:07.172746   11760 command_runner.go:130] > Device: 8,1	Inode: 5243214     Links: 1
	I0507 18:20:07.172746   11760 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0507 18:20:07.172800   11760 command_runner.go:130] > Access: 2024-05-07 18:18:07.321923736 +0000
	I0507 18:20:07.172800   11760 command_runner.go:130] > Modify: 2024-05-07 18:18:07.321923736 +0000
	I0507 18:20:07.172859   11760 command_runner.go:130] > Change: 2024-05-07 18:18:07.321923736 +0000
	I0507 18:20:07.172859   11760 command_runner.go:130] >  Birth: 2024-05-07 18:18:07.321923736 +0000
	I0507 18:20:07.181511   11760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0507 18:20:07.194251   11760 command_runner.go:130] > Certificate will not expire
	I0507 18:20:07.203386   11760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0507 18:20:07.212291   11760 command_runner.go:130] > Certificate will not expire
	I0507 18:20:07.222039   11760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0507 18:20:07.233396   11760 command_runner.go:130] > Certificate will not expire
	I0507 18:20:07.243046   11760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0507 18:20:07.254416   11760 command_runner.go:130] > Certificate will not expire
	I0507 18:20:07.263272   11760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0507 18:20:07.271179   11760 command_runner.go:130] > Certificate will not expire
	I0507 18:20:07.288228   11760 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0507 18:20:07.299094   11760 command_runner.go:130] > Certificate will not expire
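Each `openssl x509 -checkend 86400` call above succeeds ("Certificate will not expire") when the certificate's notAfter is more than 86400 seconds in the future. The same check as plain datetime arithmetic — the timestamps below are illustrative, not taken from the test run:

```python
# Equivalent of `openssl x509 -checkend <seconds>`: does the cert expire
# within the given window? Timestamps here are made up for illustration.
from datetime import datetime, timedelta, timezone

def will_expire_within(not_after: datetime, seconds: int, now: datetime) -> bool:
    return now + timedelta(seconds=seconds) >= not_after

now = datetime(2024, 5, 7, 18, 20, 7, tzinfo=timezone.utc)
not_after = now + timedelta(days=365)             # a cert with a year left
print(will_expire_within(not_after, 86400, now))  # → False ("will not expire")
```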
	I0507 18:20:07.299446   11760 kubeadm.go:391] StartCluster: {Name:functional-527400 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:functional-527400 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.129.80 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 18:20:07.306078   11760 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0507 18:20:07.351857   11760 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0507 18:20:07.375678   11760 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0507 18:20:07.375717   11760 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0507 18:20:07.375764   11760 command_runner.go:130] > /var/lib/minikube/etcd:
	I0507 18:20:07.375764   11760 command_runner.go:130] > member
	W0507 18:20:07.375881   11760 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0507 18:20:07.375881   11760 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0507 18:20:07.375881   11760 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0507 18:20:07.384388   11760 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0507 18:20:07.409870   11760 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0507 18:20:07.410896   11760 kubeconfig.go:125] found "functional-527400" server: "https://172.19.129.80:8441"
	I0507 18:20:07.412002   11760 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:20:07.412694   11760 kapi.go:59] client config for functional-527400: &rest.Config{Host:"https://172.19.129.80:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-527400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-527400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 18:20:07.414024   11760 cert_rotation.go:137] Starting client certificate rotation controller
	I0507 18:20:07.422299   11760 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0507 18:20:07.444651   11760 kubeadm.go:624] The running cluster does not require reconfiguration: 172.19.129.80
	I0507 18:20:07.444651   11760 kubeadm.go:1154] stopping kube-system containers ...
	I0507 18:20:07.453232   11760 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0507 18:20:07.493302   11760 command_runner.go:130] > 805faa80aeb9
	I0507 18:20:07.493302   11760 command_runner.go:130] > 7c0d9498c652
	I0507 18:20:07.493302   11760 command_runner.go:130] > 835333bb04e9
	I0507 18:20:07.493302   11760 command_runner.go:130] > e12da0342bc8
	I0507 18:20:07.493302   11760 command_runner.go:130] > 98523c6db396
	I0507 18:20:07.493302   11760 command_runner.go:130] > 0972535768fa
	I0507 18:20:07.493302   11760 command_runner.go:130] > a8432805a72f
	I0507 18:20:07.493501   11760 command_runner.go:130] > fb89b64b69c2
	I0507 18:20:07.493501   11760 command_runner.go:130] > 870e72eb8926
	I0507 18:20:07.493501   11760 command_runner.go:130] > 8a54d5a8faae
	I0507 18:20:07.493568   11760 command_runner.go:130] > 0fae4a449988
	I0507 18:20:07.493568   11760 command_runner.go:130] > f9153313b8f0
	I0507 18:20:07.493590   11760 command_runner.go:130] > c856389fb36c
	I0507 18:20:07.493590   11760 command_runner.go:130] > 58a16bb29ab8
	I0507 18:20:07.493590   11760 command_runner.go:130] > 72ca7202eb24
	I0507 18:20:07.493627   11760 command_runner.go:130] > 22089ff5c733
	I0507 18:20:07.493627   11760 command_runner.go:130] > de3501470e3b
	I0507 18:20:07.493672   11760 command_runner.go:130] > ce31e86b89b9
	I0507 18:20:07.493672   11760 command_runner.go:130] > 5f142561cad9
	I0507 18:20:07.493672   11760 command_runner.go:130] > 0086b94b13dd
	I0507 18:20:07.493672   11760 command_runner.go:130] > a7bc3ce0e9ac
	I0507 18:20:07.493672   11760 command_runner.go:130] > 56c39ef7de5c
	I0507 18:20:07.493672   11760 command_runner.go:130] > 2615099519ba
	I0507 18:20:07.493672   11760 command_runner.go:130] > 86adb66c059e
	I0507 18:20:07.493672   11760 command_runner.go:130] > 185fc223a54b
	I0507 18:20:07.493805   11760 docker.go:483] Stopping containers: [805faa80aeb9 7c0d9498c652 835333bb04e9 e12da0342bc8 98523c6db396 0972535768fa a8432805a72f fb89b64b69c2 870e72eb8926 8a54d5a8faae 0fae4a449988 f9153313b8f0 c856389fb36c 58a16bb29ab8 72ca7202eb24 22089ff5c733 de3501470e3b ce31e86b89b9 5f142561cad9 0086b94b13dd a7bc3ce0e9ac 56c39ef7de5c 2615099519ba 86adb66c059e 185fc223a54b]
	I0507 18:20:07.502827   11760 ssh_runner.go:195] Run: docker stop 805faa80aeb9 7c0d9498c652 835333bb04e9 e12da0342bc8 98523c6db396 0972535768fa a8432805a72f fb89b64b69c2 870e72eb8926 8a54d5a8faae 0fae4a449988 f9153313b8f0 c856389fb36c 58a16bb29ab8 72ca7202eb24 22089ff5c733 de3501470e3b ce31e86b89b9 5f142561cad9 0086b94b13dd a7bc3ce0e9ac 56c39ef7de5c 2615099519ba 86adb66c059e 185fc223a54b
	I0507 18:20:08.405427   11760 command_runner.go:130] > 805faa80aeb9
	I0507 18:20:08.405519   11760 command_runner.go:130] > 7c0d9498c652
	I0507 18:20:08.405519   11760 command_runner.go:130] > 835333bb04e9
	I0507 18:20:08.405582   11760 command_runner.go:130] > e12da0342bc8
	I0507 18:20:08.405582   11760 command_runner.go:130] > 98523c6db396
	I0507 18:20:08.405582   11760 command_runner.go:130] > 0972535768fa
	I0507 18:20:08.405582   11760 command_runner.go:130] > a8432805a72f
	I0507 18:20:08.405582   11760 command_runner.go:130] > fb89b64b69c2
	I0507 18:20:08.405646   11760 command_runner.go:130] > 870e72eb8926
	I0507 18:20:08.405646   11760 command_runner.go:130] > 8a54d5a8faae
	I0507 18:20:08.405646   11760 command_runner.go:130] > 0fae4a449988
	I0507 18:20:08.405646   11760 command_runner.go:130] > f9153313b8f0
	I0507 18:20:08.405646   11760 command_runner.go:130] > c856389fb36c
	I0507 18:20:08.405646   11760 command_runner.go:130] > 58a16bb29ab8
	I0507 18:20:08.405716   11760 command_runner.go:130] > 72ca7202eb24
	I0507 18:20:08.405716   11760 command_runner.go:130] > 22089ff5c733
	I0507 18:20:08.405716   11760 command_runner.go:130] > de3501470e3b
	I0507 18:20:08.405804   11760 command_runner.go:130] > ce31e86b89b9
	I0507 18:20:08.405804   11760 command_runner.go:130] > 5f142561cad9
	I0507 18:20:08.405831   11760 command_runner.go:130] > 0086b94b13dd
	I0507 18:20:08.405831   11760 command_runner.go:130] > a7bc3ce0e9ac
	I0507 18:20:08.405831   11760 command_runner.go:130] > 56c39ef7de5c
	I0507 18:20:08.405912   11760 command_runner.go:130] > 2615099519ba
	I0507 18:20:08.405912   11760 command_runner.go:130] > 86adb66c059e
	I0507 18:20:08.405985   11760 command_runner.go:130] > 185fc223a54b
	I0507 18:20:08.416324   11760 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0507 18:20:08.494770   11760 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0507 18:20:08.514717   11760 command_runner.go:130] > -rw------- 1 root root 5647 May  7 18:18 /etc/kubernetes/admin.conf
	I0507 18:20:08.514717   11760 command_runner.go:130] > -rw------- 1 root root 5653 May  7 18:18 /etc/kubernetes/controller-manager.conf
	I0507 18:20:08.514717   11760 command_runner.go:130] > -rw------- 1 root root 2007 May  7 18:18 /etc/kubernetes/kubelet.conf
	I0507 18:20:08.514717   11760 command_runner.go:130] > -rw------- 1 root root 5601 May  7 18:18 /etc/kubernetes/scheduler.conf
	I0507 18:20:08.515726   11760 kubeadm.go:156] found existing configuration files:
	-rw------- 1 root root 5647 May  7 18:18 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5653 May  7 18:18 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2007 May  7 18:18 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5601 May  7 18:18 /etc/kubernetes/scheduler.conf
	
	I0507 18:20:08.523724   11760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I0507 18:20:08.540385   11760 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0507 18:20:08.548243   11760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I0507 18:20:08.566225   11760 command_runner.go:130] >     server: https://control-plane.minikube.internal:8441
	I0507 18:20:08.575226   11760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I0507 18:20:08.592515   11760 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0507 18:20:08.602418   11760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0507 18:20:08.630781   11760 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I0507 18:20:08.646797   11760 kubeadm.go:162] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0507 18:20:08.655503   11760 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0507 18:20:08.684194   11760 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0507 18:20:08.699981   11760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 18:20:08.788323   11760 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0507 18:20:08.788624   11760 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0507 18:20:08.788624   11760 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0507 18:20:08.788624   11760 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0507 18:20:08.788721   11760 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0507 18:20:08.788834   11760 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0507 18:20:08.788872   11760 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0507 18:20:08.788872   11760 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0507 18:20:08.788872   11760 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0507 18:20:08.788872   11760 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0507 18:20:08.788982   11760 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0507 18:20:08.789046   11760 command_runner.go:130] > [certs] Using the existing "sa" key
	I0507 18:20:08.789141   11760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 18:20:10.119399   11760 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0507 18:20:10.119507   11760 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
	I0507 18:20:10.119507   11760 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/super-admin.conf"
	I0507 18:20:10.119507   11760 command_runner.go:130] > [kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/kubelet.conf"
	I0507 18:20:10.119507   11760 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0507 18:20:10.119583   11760 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0507 18:20:10.119583   11760 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.3303508s)
	I0507 18:20:10.119659   11760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0507 18:20:10.395913   11760 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0507 18:20:10.396868   11760 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0507 18:20:10.396868   11760 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0507 18:20:10.396868   11760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 18:20:10.479578   11760 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0507 18:20:10.479650   11760 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0507 18:20:10.479650   11760 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0507 18:20:10.479650   11760 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0507 18:20:10.479787   11760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0507 18:20:10.584602   11760 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0507 18:20:10.584710   11760 api_server.go:52] waiting for apiserver process to appear ...
	I0507 18:20:10.595939   11760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 18:20:11.099953   11760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 18:20:11.607792   11760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 18:20:12.101277   11760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 18:20:12.148361   11760 command_runner.go:130] > 5610
	I0507 18:20:12.148361   11760 api_server.go:72] duration metric: took 1.5635433s to wait for apiserver process to appear ...
	I0507 18:20:12.148361   11760 api_server.go:88] waiting for apiserver healthz status ...
	I0507 18:20:12.148361   11760 api_server.go:253] Checking apiserver healthz at https://172.19.129.80:8441/healthz ...
	I0507 18:20:14.878475   11760 api_server.go:279] https://172.19.129.80:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0507 18:20:14.878592   11760 api_server.go:103] status: https://172.19.129.80:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0507 18:20:14.878592   11760 api_server.go:253] Checking apiserver healthz at https://172.19.129.80:8441/healthz ...
	I0507 18:20:14.937186   11760 api_server.go:279] https://172.19.129.80:8441/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0507 18:20:14.937186   11760 api_server.go:103] status: https://172.19.129.80:8441/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0507 18:20:15.155001   11760 api_server.go:253] Checking apiserver healthz at https://172.19.129.80:8441/healthz ...
	I0507 18:20:15.164899   11760 api_server.go:279] https://172.19.129.80:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0507 18:20:15.164899   11760 api_server.go:103] status: https://172.19.129.80:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0507 18:20:15.657663   11760 api_server.go:253] Checking apiserver healthz at https://172.19.129.80:8441/healthz ...
	I0507 18:20:15.665796   11760 api_server.go:279] https://172.19.129.80:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0507 18:20:15.665796   11760 api_server.go:103] status: https://172.19.129.80:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0507 18:20:16.162864   11760 api_server.go:253] Checking apiserver healthz at https://172.19.129.80:8441/healthz ...
	I0507 18:20:16.174541   11760 api_server.go:279] https://172.19.129.80:8441/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0507 18:20:16.174603   11760 api_server.go:103] status: https://172.19.129.80:8441/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0507 18:20:16.656079   11760 api_server.go:253] Checking apiserver healthz at https://172.19.129.80:8441/healthz ...
	I0507 18:20:16.663042   11760 api_server.go:279] https://172.19.129.80:8441/healthz returned 200:
	ok
	I0507 18:20:16.663644   11760 round_trippers.go:463] GET https://172.19.129.80:8441/version
	I0507 18:20:16.663644   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:16.663719   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:16.663719   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:16.672557   11760 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 18:20:16.672557   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:16.672557   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:16.672557   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:16.672557   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:16.672557   11760 round_trippers.go:580]     Content-Length: 263
	I0507 18:20:16.672557   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:16 GMT
	I0507 18:20:16.672557   11760 round_trippers.go:580]     Audit-Id: 0f5828d2-6ce9-4d8f-b609-e2b3330ab80b
	I0507 18:20:16.672557   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:16.672557   11760 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0507 18:20:16.672557   11760 api_server.go:141] control plane version: v1.30.0
	I0507 18:20:16.672557   11760 api_server.go:131] duration metric: took 4.5238849s to wait for apiserver health ...
	I0507 18:20:16.672557   11760 cni.go:84] Creating CNI manager for ""
	I0507 18:20:16.672557   11760 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0507 18:20:16.678561   11760 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0507 18:20:16.689393   11760 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0507 18:20:16.720243   11760 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0507 18:20:16.767075   11760 system_pods.go:43] waiting for kube-system pods to appear ...
	I0507 18:20:16.767326   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods
	I0507 18:20:16.767326   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:16.767326   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:16.767326   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:16.775112   11760 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:20:16.775112   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:16.775112   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:16.775112   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:16.775112   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:17 GMT
	I0507 18:20:16.775112   11760 round_trippers.go:580]     Audit-Id: fe2d9cba-fd23-48ab-9b9f-d4fadea7ff02
	I0507 18:20:16.775112   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:16.775112   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:16.776106   11760 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"505"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"500","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52446 chars]
	I0507 18:20:16.780101   11760 system_pods.go:59] 7 kube-system pods found
	I0507 18:20:16.780101   11760 system_pods.go:61] "coredns-7db6d8ff4d-6b5v9" [4925e3cc-31d5-477c-9966-4d533ba939a8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 18:20:16.780101   11760 system_pods.go:61] "etcd-functional-527400" [9abcd377-8ba5-4666-afc7-fdb3f2a84083] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0507 18:20:16.780101   11760 system_pods.go:61] "kube-apiserver-functional-527400" [c4a7dba1-d1fe-49d4-bb75-72415782e9c2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0507 18:20:16.780101   11760 system_pods.go:61] "kube-controller-manager-functional-527400" [3a4e6083-ef54-4e5f-b89d-51823bc2999b] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0507 18:20:16.780101   11760 system_pods.go:61] "kube-proxy-9lf2q" [728dcb3a-0eb1-45b5-92a6-35c6819af3bf] Running
	I0507 18:20:16.780101   11760 system_pods.go:61] "kube-scheduler-functional-527400" [12cb2956-7c05-444a-ae86-a409f3c4f7b5] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0507 18:20:16.780101   11760 system_pods.go:61] "storage-provisioner" [514d12a0-9694-41b7-9ed5-5ae68ad0a037] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0507 18:20:16.780101   11760 system_pods.go:74] duration metric: took 12.9368ms to wait for pod list to return data ...
	I0507 18:20:16.780101   11760 node_conditions.go:102] verifying NodePressure condition ...
	I0507 18:20:16.780101   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes
	I0507 18:20:16.780101   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:16.780101   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:16.780101   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:16.788122   11760 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 18:20:16.788122   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:16.788122   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:16.788122   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:16.788122   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:16.788122   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:16.788122   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:17 GMT
	I0507 18:20:16.788816   11760 round_trippers.go:580]     Audit-Id: b996e81a-5923-4efd-be55-6cfa1f1217d9
	I0507 18:20:16.788935   11760 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"507"},"items":[{"metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0507 18:20:16.789767   11760 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 18:20:16.789767   11760 node_conditions.go:123] node cpu capacity is 2
	I0507 18:20:16.789822   11760 node_conditions.go:105] duration metric: took 9.7202ms to run NodePressure ...
	I0507 18:20:16.789878   11760 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 18:20:17.340719   11760 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0507 18:20:17.340841   11760 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0507 18:20:17.340907   11760 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0507 18:20:17.341217   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0507 18:20:17.341268   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:17.341268   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:17.341268   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:17.346244   11760 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:20:17.346244   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:17.346244   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:17 GMT
	I0507 18:20:17.346244   11760 round_trippers.go:580]     Audit-Id: 9f6d4ee0-b8d4-4d4b-92ab-8c42707e1dd9
	I0507 18:20:17.346244   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:17.346244   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:17.346244   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:17.346244   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:17.347184   11760 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"550"},"items":[{"metadata":{"name":"etcd-functional-527400","namespace":"kube-system","uid":"9abcd377-8ba5-4666-afc7-fdb3f2a84083","resourceVersion":"496","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.129.80:2379","kubernetes.io/config.hash":"6a8912a658474f6abf27cdfaacc14627","kubernetes.io/config.mirror":"6a8912a658474f6abf27cdfaacc14627","kubernetes.io/config.seen":"2024-05-07T18:18:18.603000853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 31933 chars]
	I0507 18:20:17.349149   11760 kubeadm.go:733] kubelet initialised
	I0507 18:20:17.349149   11760 kubeadm.go:734] duration metric: took 8.1472ms waiting for restarted kubelet to initialise ...
	I0507 18:20:17.349215   11760 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 18:20:17.349300   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods
	I0507 18:20:17.349300   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:17.349388   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:17.349388   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:17.359766   11760 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0507 18:20:17.359766   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:17.359766   11760 round_trippers.go:580]     Audit-Id: c2a74fd7-a1ae-486e-88d1-f933b4c76809
	I0507 18:20:17.359766   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:17.359766   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:17.359766   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:17.359766   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:17.359766   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:17 GMT
	I0507 18:20:17.361456   11760 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"551"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"500","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 52446 chars]
	I0507 18:20:17.363775   11760 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-6b5v9" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:17.363775   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:17.363775   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:17.363775   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:17.363775   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:17.366785   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:17.366785   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:17.366785   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:17.366785   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:17.366785   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:17 GMT
	I0507 18:20:17.366785   11760 round_trippers.go:580]     Audit-Id: 7263bcc2-9187-4eab-aac7-88439cc9971a
	I0507 18:20:17.366785   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:17.366785   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:17.367316   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"500","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6639 chars]
	I0507 18:20:17.367912   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:17.367994   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:17.367994   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:17.367994   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:17.372777   11760 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:20:17.372777   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:17.372777   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:17 GMT
	I0507 18:20:17.373674   11760 round_trippers.go:580]     Audit-Id: 6450a01a-b59d-4613-bb97-f09aa8fe2f75
	I0507 18:20:17.373674   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:17.373674   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:17.373674   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:17.373674   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:17.374028   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:17.873486   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:17.873574   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:17.873574   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:17.873574   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:17.878937   11760 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:20:17.878937   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:17.879030   11760 round_trippers.go:580]     Audit-Id: accd1f72-4b96-4382-b1ce-b8dfca8144a3
	I0507 18:20:17.879030   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:17.879082   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:17.879127   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:17.879127   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:17.879127   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:18 GMT
	I0507 18:20:17.879398   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"559","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6815 chars]
	I0507 18:20:17.880551   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:17.880599   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:17.880599   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:17.880665   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:17.883479   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:17.883479   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:17.883479   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:17.883479   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:17.883540   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:18 GMT
	I0507 18:20:17.883540   11760 round_trippers.go:580]     Audit-Id: 804f5ea5-aaf1-4073-a389-e0e0570a8e5d
	I0507 18:20:17.883540   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:17.883540   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:17.884352   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:18.375205   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:18.375316   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:18.375316   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:18.375316   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:18.379018   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:18.379551   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:18.379551   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:18 GMT
	I0507 18:20:18.379551   11760 round_trippers.go:580]     Audit-Id: d20bdf6b-e5e3-49c8-8d61-1c40ae7c1499
	I0507 18:20:18.379643   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:18.379643   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:18.379643   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:18.379643   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:18.379991   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"559","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6815 chars]
	I0507 18:20:18.381054   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:18.381133   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:18.381133   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:18.381200   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:18.384550   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:18.384550   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:18.384550   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:18.384550   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:18 GMT
	I0507 18:20:18.384850   11760 round_trippers.go:580]     Audit-Id: 0345ae8c-2029-4950-8aa8-9b73223a00b6
	I0507 18:20:18.384971   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:18.384971   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:18.384971   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:18.385414   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:18.875342   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:18.875605   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:18.875605   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:18.875605   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:18.882889   11760 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:20:18.882889   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:18.882889   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:18.882889   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:18.882889   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:18.882889   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:18.882889   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:19 GMT
	I0507 18:20:18.882889   11760 round_trippers.go:580]     Audit-Id: 2d2f5729-0e5f-4b80-9161-ae43215b441c
	I0507 18:20:18.883781   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"559","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6815 chars]
	I0507 18:20:18.884462   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:18.884492   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:18.884492   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:18.884492   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:18.887675   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:18.887743   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:18.887823   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:18.887823   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:18.887823   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:18.887823   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:19 GMT
	I0507 18:20:18.887823   11760 round_trippers.go:580]     Audit-Id: f69a1294-5f84-4184-bdba-3e0f8242941a
	I0507 18:20:18.887881   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:18.888351   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:19.374238   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:19.374238   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:19.374238   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:19.374238   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:19.377834   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:19.377834   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:19.378260   11760 round_trippers.go:580]     Audit-Id: dabe729e-4b7d-45bf-a37a-2423c4c191b5
	I0507 18:20:19.378260   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:19.378260   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:19.378260   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:19.378260   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:19.378260   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:19 GMT
	I0507 18:20:19.378431   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"559","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6815 chars]
	I0507 18:20:19.379084   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:19.379167   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:19.379167   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:19.379167   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:19.381971   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:19.382145   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:19.382145   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:19.382145   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:19.382145   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:19.382145   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:19.382145   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:19 GMT
	I0507 18:20:19.382145   11760 round_trippers.go:580]     Audit-Id: 25688e70-fd2d-40be-bb14-7448e0e10de3
	I0507 18:20:19.386018   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:19.386590   11760 pod_ready.go:102] pod "coredns-7db6d8ff4d-6b5v9" in "kube-system" namespace has status "Ready":"False"
	I0507 18:20:19.871306   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:19.871571   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:19.871571   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:19.871571   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:19.874902   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:19.874902   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:19.874902   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:20 GMT
	I0507 18:20:19.874902   11760 round_trippers.go:580]     Audit-Id: 76ed292f-b6e6-4097-9df9-3cf1b389c058
	I0507 18:20:19.874902   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:19.875167   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:19.875167   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:19.875167   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:19.875437   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"559","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6815 chars]
	I0507 18:20:19.876520   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:19.876520   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:19.876520   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:19.876520   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:19.881580   11760 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:20:19.881580   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:19.881580   11760 round_trippers.go:580]     Audit-Id: 2ecc37e8-6883-418c-b363-f0a161869367
	I0507 18:20:19.881580   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:19.881580   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:19.881580   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:19.881580   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:19.881580   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:20 GMT
	I0507 18:20:19.881580   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:20.368763   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:20.368763   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:20.368763   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:20.368763   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:20.373536   11760 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:20:20.373536   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:20.373536   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:20 GMT
	I0507 18:20:20.373536   11760 round_trippers.go:580]     Audit-Id: 8cfa461b-a1ca-4566-a885-8856a2ee4ea2
	I0507 18:20:20.373536   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:20.373536   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:20.373883   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:20.373883   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:20.374106   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"559","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6815 chars]
	I0507 18:20:20.374885   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:20.374885   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:20.374944   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:20.374944   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:20.380290   11760 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:20:20.380491   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:20.380491   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:20.380491   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:20 GMT
	I0507 18:20:20.380491   11760 round_trippers.go:580]     Audit-Id: 5709f297-6539-447e-9b5d-44efff9f4707
	I0507 18:20:20.380491   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:20.380491   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:20.380491   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:20.380491   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:20.871074   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:20.871074   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:20.871074   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:20.871074   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:20.874335   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:20.874335   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:20.875271   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:20.875271   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:21 GMT
	I0507 18:20:20.875314   11760 round_trippers.go:580]     Audit-Id: 9e18de12-098d-469f-9544-b216ea24287f
	I0507 18:20:20.875314   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:20.875314   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:20.875314   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:20.875369   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"559","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6815 chars]
	I0507 18:20:20.876579   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:20.876579   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:20.876721   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:20.876721   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:20.879765   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:20.879765   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:20.879765   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:20.879765   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:20.879765   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:21 GMT
	I0507 18:20:20.879765   11760 round_trippers.go:580]     Audit-Id: 36ab523a-26a3-4310-8857-00a0971d7b99
	I0507 18:20:20.879765   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:20.879904   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:20.880177   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:21.369288   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:21.369288   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:21.369288   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:21.369288   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:21.372729   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:21.372729   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:21.372729   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:21.372729   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:21.372729   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:21 GMT
	I0507 18:20:21.372729   11760 round_trippers.go:580]     Audit-Id: 08236abe-3d04-44a1-899a-a927a2caf9b3
	I0507 18:20:21.372729   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:21.372729   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:21.373814   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"559","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6815 chars]
	I0507 18:20:21.374494   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:21.374494   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:21.374494   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:21.374494   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:21.377086   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:21.377086   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:21.377527   11760 round_trippers.go:580]     Audit-Id: e598555f-8687-4ef6-99c3-90cc9f5e43b5
	I0507 18:20:21.377527   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:21.377527   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:21.377527   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:21.377527   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:21.377527   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:21 GMT
	I0507 18:20:21.377720   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:21.869176   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:21.869287   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:21.869287   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:21.869287   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:21.873115   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:21.874112   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:21.874157   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:22 GMT
	I0507 18:20:21.874157   11760 round_trippers.go:580]     Audit-Id: 7fc33051-d28b-4ece-b559-1e6b5c062ea2
	I0507 18:20:21.874157   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:21.874157   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:21.874157   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:21.874157   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:21.874885   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"559","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6815 chars]
	I0507 18:20:21.875307   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:21.875307   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:21.875307   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:21.875307   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:21.880749   11760 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:20:21.880749   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:21.880749   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:21.880749   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:21.880863   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:21.880863   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:22 GMT
	I0507 18:20:21.880863   11760 round_trippers.go:580]     Audit-Id: 46a00fb9-0a39-4681-a66c-f517cc8dda85
	I0507 18:20:21.880863   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:21.881132   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:21.881479   11760 pod_ready.go:102] pod "coredns-7db6d8ff4d-6b5v9" in "kube-system" namespace has status "Ready":"False"
	I0507 18:20:22.368567   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:22.368652   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:22.368652   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:22.368652   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:22.371564   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:22.372568   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:22.372568   11760 round_trippers.go:580]     Audit-Id: 5b093aaf-65f5-427f-8ccd-2480191d12c1
	I0507 18:20:22.372568   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:22.372568   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:22.372568   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:22.372568   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:22.372568   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:22 GMT
	I0507 18:20:22.372568   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"559","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6815 chars]
	I0507 18:20:22.373836   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:22.373922   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:22.373922   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:22.373922   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:22.377423   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:22.377502   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:22.377574   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:22.377574   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:22.377574   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:22.377574   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:22.377574   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:22 GMT
	I0507 18:20:22.377574   11760 round_trippers.go:580]     Audit-Id: 8c89746c-9934-4a20-8c96-4095ce9d8347
	I0507 18:20:22.378037   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:22.867392   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:22.867392   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:22.867392   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:22.867392   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:22.870971   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:22.870971   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:22.870971   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:22.870971   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:23 GMT
	I0507 18:20:22.871114   11760 round_trippers.go:580]     Audit-Id: c32a75bc-3d9e-45e0-b5df-185b449989e6
	I0507 18:20:22.871114   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:22.871114   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:22.871114   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:22.871325   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"559","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6815 chars]
	I0507 18:20:22.872237   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:22.872237   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:22.872348   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:22.872348   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:22.874515   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:22.874515   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:22.874515   11760 round_trippers.go:580]     Audit-Id: a88a18ab-5b0b-4aa4-ad0f-d9cd55f7c8cb
	I0507 18:20:22.874515   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:22.874515   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:22.874515   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:22.874515   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:22.874515   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:23 GMT
	I0507 18:20:22.875583   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:23.367008   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:23.367008   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:23.367098   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:23.367098   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:23.372439   11760 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:20:23.372675   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:23.372675   11760 round_trippers.go:580]     Audit-Id: 737188b9-06de-4dc4-9379-a0ea3c36b06a
	I0507 18:20:23.372675   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:23.372791   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:23.372791   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:23.372791   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:23.372791   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:23 GMT
	I0507 18:20:23.373002   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"559","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6815 chars]
	I0507 18:20:23.373675   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:23.373764   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:23.373764   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:23.373764   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:23.376990   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:23.376990   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:23.376990   11760 round_trippers.go:580]     Audit-Id: 92ba7e3d-cb03-4e5d-a3c1-69d7d0476e03
	I0507 18:20:23.376990   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:23.376990   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:23.376990   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:23.376990   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:23.376990   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:23 GMT
	I0507 18:20:23.377370   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:23.873399   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:23.873460   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:23.873521   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:23.873521   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:23.876799   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:23.876799   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:23.876799   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:23.876799   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:23.877646   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:23.877646   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:23.877646   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:24 GMT
	I0507 18:20:23.877646   11760 round_trippers.go:580]     Audit-Id: f02f0a96-df29-423e-ae62-33ae1a4c459d
	I0507 18:20:23.877799   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"559","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6815 chars]
	I0507 18:20:23.878662   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:23.878662   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:23.878662   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:23.878662   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:23.885149   11760 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:20:23.885149   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:23.885149   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:24 GMT
	I0507 18:20:23.885149   11760 round_trippers.go:580]     Audit-Id: 9eca2424-3c32-4c8d-9361-0ace72d60cc4
	I0507 18:20:23.885149   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:23.885149   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:23.885149   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:23.885149   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:23.885149   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:23.886469   11760 pod_ready.go:102] pod "coredns-7db6d8ff4d-6b5v9" in "kube-system" namespace has status "Ready":"False"
	I0507 18:20:24.370487   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:24.370487   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:24.370487   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:24.370487   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:24.373803   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:24.373803   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:24.373803   11760 round_trippers.go:580]     Audit-Id: 7695f6d9-34e1-4811-9326-b4ffe6b35388
	I0507 18:20:24.373803   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:24.373803   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:24.373803   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:24.373803   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:24.373803   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:24 GMT
	I0507 18:20:24.374948   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"559","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6815 chars]
	I0507 18:20:24.375536   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:24.375637   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:24.375637   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:24.375637   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:24.377782   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:24.377782   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:24.377782   11760 round_trippers.go:580]     Audit-Id: f12a2394-6608-4bab-b352-fe91b6b3d9c1
	I0507 18:20:24.377782   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:24.377782   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:24.377782   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:24.377782   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:24.377782   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:24 GMT
	I0507 18:20:24.378868   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:24.870695   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:24.870695   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:24.870695   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:24.870695   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:24.874495   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:24.874495   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:24.874582   11760 round_trippers.go:580]     Audit-Id: 1cdaa1ed-7ea0-4f55-aacd-30aeeec772e9
	I0507 18:20:24.874582   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:24.874582   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:24.874582   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:24.874582   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:24.874582   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:25 GMT
	I0507 18:20:24.874961   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"564","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6586 chars]
	I0507 18:20:24.875982   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:24.876049   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:24.876049   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:24.876049   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:24.882291   11760 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:20:24.882488   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:24.882488   11760 round_trippers.go:580]     Audit-Id: d6df3adb-08bb-4289-80b7-ebfc9275ea0c
	I0507 18:20:24.882488   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:24.882488   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:24.882488   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:24.882488   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:24.882488   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:25 GMT
	I0507 18:20:24.882488   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:24.882488   11760 pod_ready.go:92] pod "coredns-7db6d8ff4d-6b5v9" in "kube-system" namespace has status "Ready":"True"
	I0507 18:20:24.882488   11760 pod_ready.go:81] duration metric: took 7.5181957s for pod "coredns-7db6d8ff4d-6b5v9" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:24.882488   11760 pod_ready.go:78] waiting up to 4m0s for pod "etcd-functional-527400" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:24.882488   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/etcd-functional-527400
	I0507 18:20:24.882488   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:24.882488   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:24.882488   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:24.886544   11760 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:20:24.886544   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:24.886544   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:24.886544   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:24.886544   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:24.886544   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:25 GMT
	I0507 18:20:24.886544   11760 round_trippers.go:580]     Audit-Id: f06dcdcd-6b37-4ead-a6e8-ccf6756a58b1
	I0507 18:20:24.886544   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:24.887709   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-527400","namespace":"kube-system","uid":"9abcd377-8ba5-4666-afc7-fdb3f2a84083","resourceVersion":"563","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.129.80:2379","kubernetes.io/config.hash":"6a8912a658474f6abf27cdfaacc14627","kubernetes.io/config.mirror":"6a8912a658474f6abf27cdfaacc14627","kubernetes.io/config.seen":"2024-05-07T18:18:18.603000853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6621 chars]
	I0507 18:20:24.888705   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:24.888705   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:24.888705   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:24.888705   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:24.891547   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:24.891547   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:24.891547   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:24.891547   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:25 GMT
	I0507 18:20:24.891547   11760 round_trippers.go:580]     Audit-Id: e409946f-dbd5-4ca5-9071-d4fbcd22beb2
	I0507 18:20:24.891547   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:24.891547   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:24.891547   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:24.891939   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:24.891939   11760 pod_ready.go:92] pod "etcd-functional-527400" in "kube-system" namespace has status "Ready":"True"
	I0507 18:20:24.891939   11760 pod_ready.go:81] duration metric: took 9.4507ms for pod "etcd-functional-527400" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:24.891939   11760 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-functional-527400" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:24.892522   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-527400
	I0507 18:20:24.892522   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:24.892522   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:24.892522   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:24.895104   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:24.895104   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:24.895104   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:24.895104   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:24.895104   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:24.895104   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:24.895104   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:25 GMT
	I0507 18:20:24.895104   11760 round_trippers.go:580]     Audit-Id: 3254f229-37c1-4a14-badd-44261f019aa9
	I0507 18:20:24.895826   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-527400","namespace":"kube-system","uid":"c4a7dba1-d1fe-49d4-bb75-72415782e9c2","resourceVersion":"494","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.129.80:8441","kubernetes.io/config.hash":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.mirror":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.seen":"2024-05-07T18:18:18.602995453Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8398 chars]
	I0507 18:20:24.896419   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:24.896419   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:24.896419   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:24.896483   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:24.898146   11760 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0507 18:20:24.898146   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:24.898146   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:24.898146   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:24.898146   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:24.898146   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:24.898146   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:25 GMT
	I0507 18:20:24.898146   11760 round_trippers.go:580]     Audit-Id: 1362d724-6ebf-4917-8311-0ceb47040764
	I0507 18:20:24.899050   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:25.401188   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-527400
	I0507 18:20:25.401188   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:25.401305   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:25.401305   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:25.411039   11760 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0507 18:20:25.411439   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:25.411439   11760 round_trippers.go:580]     Audit-Id: b91af063-e111-41ee-8ea8-96f1a9c028ee
	I0507 18:20:25.411439   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:25.411439   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:25.411439   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:25.411439   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:25.411439   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:25 GMT
	I0507 18:20:25.411710   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-527400","namespace":"kube-system","uid":"c4a7dba1-d1fe-49d4-bb75-72415782e9c2","resourceVersion":"494","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.129.80:8441","kubernetes.io/config.hash":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.mirror":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.seen":"2024-05-07T18:18:18.602995453Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8398 chars]
	I0507 18:20:25.412309   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:25.412382   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:25.412382   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:25.412382   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:25.415096   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:25.415538   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:25.415570   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:25.415570   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:25 GMT
	I0507 18:20:25.415570   11760 round_trippers.go:580]     Audit-Id: 13960127-a662-407e-97ce-1eca4e01bdc4
	I0507 18:20:25.415570   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:25.415570   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:25.415570   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:25.415570   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:25.896881   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-527400
	I0507 18:20:25.896881   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:25.896881   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:25.896881   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:25.900439   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:25.900439   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:25.900439   11760 round_trippers.go:580]     Audit-Id: e061b086-92d2-4cff-878f-b4d646101b20
	I0507 18:20:25.900439   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:25.900439   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:25.900439   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:25.900439   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:25.900439   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:26 GMT
	I0507 18:20:25.901151   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-527400","namespace":"kube-system","uid":"c4a7dba1-d1fe-49d4-bb75-72415782e9c2","resourceVersion":"494","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.129.80:8441","kubernetes.io/config.hash":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.mirror":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.seen":"2024-05-07T18:18:18.602995453Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8398 chars]
	I0507 18:20:25.901877   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:25.901954   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:25.901954   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:25.901954   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:25.905069   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:25.905069   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:25.905069   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:26 GMT
	I0507 18:20:25.905069   11760 round_trippers.go:580]     Audit-Id: 822be6cd-f349-4150-ad42-e211141329f0
	I0507 18:20:25.905604   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:25.905604   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:25.905604   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:25.905604   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:25.905959   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:26.397412   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-527400
	I0507 18:20:26.397526   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:26.397526   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:26.397526   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:26.400031   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:26.400031   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:26.400031   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:26.400031   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:26.400031   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:26.400031   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:26.400031   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:26 GMT
	I0507 18:20:26.400031   11760 round_trippers.go:580]     Audit-Id: 9970a9c4-777d-4b92-96b1-1b282536c0f8
	I0507 18:20:26.401248   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-527400","namespace":"kube-system","uid":"c4a7dba1-d1fe-49d4-bb75-72415782e9c2","resourceVersion":"494","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.129.80:8441","kubernetes.io/config.hash":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.mirror":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.seen":"2024-05-07T18:18:18.602995453Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8398 chars]
	I0507 18:20:26.401854   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:26.401854   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:26.401854   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:26.401927   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:26.407275   11760 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:20:26.407275   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:26.407275   11760 round_trippers.go:580]     Audit-Id: 2e5bcf86-96bd-4334-8c0b-29bc08dcf619
	I0507 18:20:26.407275   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:26.407275   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:26.407275   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:26.407275   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:26.407275   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:26 GMT
	I0507 18:20:26.408475   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:26.898539   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-527400
	I0507 18:20:26.898629   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:26.898629   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:26.898629   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:26.902684   11760 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:20:26.902684   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:26.902684   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:26.902684   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:26.902684   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:26.902684   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:27 GMT
	I0507 18:20:26.902684   11760 round_trippers.go:580]     Audit-Id: 4881c874-3982-4837-84f2-aa0b788bed11
	I0507 18:20:26.902684   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:26.903353   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-527400","namespace":"kube-system","uid":"c4a7dba1-d1fe-49d4-bb75-72415782e9c2","resourceVersion":"494","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.129.80:8441","kubernetes.io/config.hash":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.mirror":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.seen":"2024-05-07T18:18:18.602995453Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8398 chars]
	I0507 18:20:26.904411   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:26.904484   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:26.904484   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:26.904574   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:26.906952   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:26.906952   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:26.906952   11760 round_trippers.go:580]     Audit-Id: 0f5d5d5c-ea5f-4d61-9aec-708cfb25f6b0
	I0507 18:20:26.906952   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:26.906952   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:26.906952   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:26.906952   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:26.906952   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:27 GMT
	I0507 18:20:26.907941   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:26.908352   11760 pod_ready.go:102] pod "kube-apiserver-functional-527400" in "kube-system" namespace has status "Ready":"False"
	I0507 18:20:27.400109   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-527400
	I0507 18:20:27.400351   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:27.400424   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:27.400452   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:27.405890   11760 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:20:27.405890   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:27.405890   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:27.405890   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:27.405890   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:27.405890   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:27.405890   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:27 GMT
	I0507 18:20:27.405890   11760 round_trippers.go:580]     Audit-Id: b4406667-caf7-4a19-97d3-3643ab40fe84
	I0507 18:20:27.406400   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-527400","namespace":"kube-system","uid":"c4a7dba1-d1fe-49d4-bb75-72415782e9c2","resourceVersion":"494","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.129.80:8441","kubernetes.io/config.hash":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.mirror":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.seen":"2024-05-07T18:18:18.602995453Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8398 chars]
	I0507 18:20:27.407022   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:27.407182   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:27.407182   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:27.407182   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:27.410081   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:27.410081   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:27.410081   11760 round_trippers.go:580]     Audit-Id: fbb85b5e-fa28-4bec-b452-94351d32f736
	I0507 18:20:27.410081   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:27.410081   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:27.410081   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:27.410081   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:27.410081   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:27 GMT
	I0507 18:20:27.410081   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:27.905150   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-527400
	I0507 18:20:27.905150   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:27.905150   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:27.905150   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:27.909039   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:27.909039   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:27.909039   11760 round_trippers.go:580]     Audit-Id: e966769e-e57c-4928-be20-ded421bf04de
	I0507 18:20:27.909039   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:27.909039   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:27.909039   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:27.909039   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:27.909039   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:28 GMT
	I0507 18:20:27.909601   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-527400","namespace":"kube-system","uid":"c4a7dba1-d1fe-49d4-bb75-72415782e9c2","resourceVersion":"494","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.129.80:8441","kubernetes.io/config.hash":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.mirror":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.seen":"2024-05-07T18:18:18.602995453Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8398 chars]
	I0507 18:20:27.910436   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:27.910436   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:27.910436   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:27.910436   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:27.912993   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:27.913231   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:27.913231   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:27.913231   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:27.913231   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:27.913231   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:28 GMT
	I0507 18:20:27.913231   11760 round_trippers.go:580]     Audit-Id: 5c883226-020e-4061-89e0-bcd7c0efaa16
	I0507 18:20:27.913320   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:27.913560   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:28.395618   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-527400
	I0507 18:20:28.395618   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:28.395618   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:28.395618   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:28.399205   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:28.399205   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:28.399603   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:28 GMT
	I0507 18:20:28.399603   11760 round_trippers.go:580]     Audit-Id: 26fcb4e3-8fba-4d43-b8a2-54acabc08157
	I0507 18:20:28.399603   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:28.399793   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:28.399793   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:28.399793   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:28.400290   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-527400","namespace":"kube-system","uid":"c4a7dba1-d1fe-49d4-bb75-72415782e9c2","resourceVersion":"494","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.129.80:8441","kubernetes.io/config.hash":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.mirror":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.seen":"2024-05-07T18:18:18.602995453Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8398 chars]
	I0507 18:20:28.401192   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:28.401192   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:28.401192   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:28.401192   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:28.403811   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:28.403811   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:28.403811   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:28.403811   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:28.403811   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:28 GMT
	I0507 18:20:28.403811   11760 round_trippers.go:580]     Audit-Id: 97ab3033-ae26-4c5b-92fc-52bcd182d20f
	I0507 18:20:28.403811   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:28.403811   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:28.404578   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:28.895223   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-527400
	I0507 18:20:28.895223   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:28.895223   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:28.895223   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:28.898805   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:28.899323   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:28.899323   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:28.899323   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:29 GMT
	I0507 18:20:28.899323   11760 round_trippers.go:580]     Audit-Id: 57266e71-17ea-4bf5-bd44-4d72a78d650e
	I0507 18:20:28.899323   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:28.899414   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:28.899414   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:28.899553   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-527400","namespace":"kube-system","uid":"c4a7dba1-d1fe-49d4-bb75-72415782e9c2","resourceVersion":"494","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.129.80:8441","kubernetes.io/config.hash":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.mirror":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.seen":"2024-05-07T18:18:18.602995453Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8398 chars]
	I0507 18:20:28.900926   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:28.901007   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:28.901007   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:28.901007   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:28.903821   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:28.903821   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:28.903821   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:28.903821   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:28.903821   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:29 GMT
	I0507 18:20:28.903821   11760 round_trippers.go:580]     Audit-Id: 0cc249ac-0fac-414e-85b8-bc6001b02602
	I0507 18:20:28.903821   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:28.903821   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:28.904909   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:29.394309   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-527400
	I0507 18:20:29.394539   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:29.394539   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:29.394539   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:29.399748   11760 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:20:29.399748   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:29.400309   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:29.400309   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:29.400309   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:29.400356   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:29 GMT
	I0507 18:20:29.400356   11760 round_trippers.go:580]     Audit-Id: f5c39b7b-c9b4-40b8-9132-6460d8fdd146
	I0507 18:20:29.400356   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:29.400447   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-527400","namespace":"kube-system","uid":"c4a7dba1-d1fe-49d4-bb75-72415782e9c2","resourceVersion":"575","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.129.80:8441","kubernetes.io/config.hash":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.mirror":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.seen":"2024-05-07T18:18:18.602995453Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8397 chars]
	I0507 18:20:29.401161   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:29.401161   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:29.401161   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:29.401161   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:29.405669   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:29.405669   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:29.405669   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:29.405669   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:29.405669   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:29 GMT
	I0507 18:20:29.405669   11760 round_trippers.go:580]     Audit-Id: cec1c772-0d0e-4ecf-a536-c59ffc4427dd
	I0507 18:20:29.405669   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:29.405669   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:29.405669   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:29.405669   11760 pod_ready.go:102] pod "kube-apiserver-functional-527400" in "kube-system" namespace has status "Ready":"False"
	I0507 18:20:29.895539   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-527400
	I0507 18:20:29.895642   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:29.895744   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:29.895744   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:29.901237   11760 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:20:29.901237   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:29.901237   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:29.901237   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:30 GMT
	I0507 18:20:29.901237   11760 round_trippers.go:580]     Audit-Id: b2a4f489-6341-4fe4-84b2-916b6860167c
	I0507 18:20:29.901237   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:29.901237   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:29.901237   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:29.901891   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-527400","namespace":"kube-system","uid":"c4a7dba1-d1fe-49d4-bb75-72415782e9c2","resourceVersion":"576","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.129.80:8441","kubernetes.io/config.hash":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.mirror":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.seen":"2024-05-07T18:18:18.602995453Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8154 chars]
	I0507 18:20:29.902536   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:29.902536   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:29.902536   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:29.902536   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:29.906318   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:29.906318   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:29.906318   11760 round_trippers.go:580]     Audit-Id: 23ccd0b0-1ca3-449f-baa8-24100b077e5e
	I0507 18:20:29.906318   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:29.906318   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:29.906318   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:29.906318   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:29.906318   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:30 GMT
	I0507 18:20:29.906318   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:29.906318   11760 pod_ready.go:92] pod "kube-apiserver-functional-527400" in "kube-system" namespace has status "Ready":"True"
	I0507 18:20:29.906318   11760 pod_ready.go:81] duration metric: took 5.0140336s for pod "kube-apiserver-functional-527400" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:29.906318   11760 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-functional-527400" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:29.906318   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-527400
	I0507 18:20:29.906318   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:29.906318   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:29.906318   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:29.909520   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:29.909520   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:29.909520   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:29.909520   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:29.909520   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:29.909520   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:30 GMT
	I0507 18:20:29.909520   11760 round_trippers.go:580]     Audit-Id: d77fc251-e418-4806-b5a1-dd107aff8277
	I0507 18:20:29.909520   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:29.910699   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-527400","namespace":"kube-system","uid":"3a4e6083-ef54-4e5f-b89d-51823bc2999b","resourceVersion":"571","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f58ad332f34d29646b02ab6d1afdba59","kubernetes.io/config.mirror":"f58ad332f34d29646b02ab6d1afdba59","kubernetes.io/config.seen":"2024-05-07T18:18:18.602998853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7698 chars]
	I0507 18:20:29.911341   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:29.911341   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:29.911378   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:29.911378   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:29.913799   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:29.913799   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:29.913799   11760 round_trippers.go:580]     Audit-Id: 3101d94d-a7a0-445c-8f8c-2ed7a7198d28
	I0507 18:20:29.913799   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:29.913799   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:29.913799   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:29.913799   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:29.913799   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:30 GMT
	I0507 18:20:29.913799   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:29.914801   11760 pod_ready.go:92] pod "kube-controller-manager-functional-527400" in "kube-system" namespace has status "Ready":"True"
	I0507 18:20:29.914801   11760 pod_ready.go:81] duration metric: took 8.4823ms for pod "kube-controller-manager-functional-527400" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:29.914801   11760 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9lf2q" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:29.914801   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-proxy-9lf2q
	I0507 18:20:29.914801   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:29.914801   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:29.914801   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:29.916849   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:29.916849   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:29.916849   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:30 GMT
	I0507 18:20:29.916849   11760 round_trippers.go:580]     Audit-Id: 63a7255b-faa9-4356-bfa2-136461159a04
	I0507 18:20:29.916849   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:29.916849   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:29.916849   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:29.916849   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:29.917977   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9lf2q","generateName":"kube-proxy-","namespace":"kube-system","uid":"728dcb3a-0eb1-45b5-92a6-35c6819af3bf","resourceVersion":"503","creationTimestamp":"2024-05-07T18:18:32Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5950ebc1-1c4a-485c-9979-957e4be7eecd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5950ebc1-1c4a-485c-9979-957e4be7eecd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6035 chars]
	I0507 18:20:29.918582   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:29.918582   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:29.918582   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:29.918582   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:29.921776   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:29.921776   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:29.921776   11760 round_trippers.go:580]     Audit-Id: 803e3c5c-22ef-4e4d-ba44-bb1793c0ff99
	I0507 18:20:29.921776   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:29.921776   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:29.921776   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:29.921873   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:29.921873   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:30 GMT
	I0507 18:20:29.922036   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:29.922036   11760 pod_ready.go:92] pod "kube-proxy-9lf2q" in "kube-system" namespace has status "Ready":"True"
	I0507 18:20:29.922036   11760 pod_ready.go:81] duration metric: took 7.2346ms for pod "kube-proxy-9lf2q" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:29.922036   11760 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-functional-527400" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:29.922036   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-527400
	I0507 18:20:29.922562   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:29.922603   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:29.922603   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:29.925200   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:29.925200   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:29.925200   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:29.925200   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:29.925200   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:29.925200   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:29.925200   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:30 GMT
	I0507 18:20:29.925200   11760 round_trippers.go:580]     Audit-Id: 71abfae1-5bdf-4ab5-9249-a0fffdb53780
	I0507 18:20:29.925200   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-527400","namespace":"kube-system","uid":"12cb2956-7c05-444a-ae86-a409f3c4f7b5","resourceVersion":"561","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d37a6ceba39f554ffabe6fc019d79c9","kubernetes.io/config.mirror":"1d37a6ceba39f554ffabe6fc019d79c9","kubernetes.io/config.seen":"2024-05-07T18:18:18.602999953Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5454 chars]
	I0507 18:20:29.925747   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:29.925747   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:29.925747   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:29.925747   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:29.927909   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:29.927909   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:29.927909   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:29.927909   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:29.927909   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:29.927909   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:30 GMT
	I0507 18:20:29.927909   11760 round_trippers.go:580]     Audit-Id: 7075a768-2a62-4af9-8d6b-d79bbd74da5b
	I0507 18:20:29.927909   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:29.928573   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:29.928778   11760 pod_ready.go:92] pod "kube-scheduler-functional-527400" in "kube-system" namespace has status "Ready":"True"
	I0507 18:20:29.928778   11760 pod_ready.go:81] duration metric: took 6.7417ms for pod "kube-scheduler-functional-527400" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:29.928778   11760 pod_ready.go:38] duration metric: took 12.5786975s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 18:20:29.928778   11760 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0507 18:20:29.944869   11760 command_runner.go:130] > -16
	I0507 18:20:29.944869   11760 ops.go:34] apiserver oom_adj: -16
	I0507 18:20:29.944869   11760 kubeadm.go:591] duration metric: took 22.5674354s to restartPrimaryControlPlane
	I0507 18:20:29.944869   11760 kubeadm.go:393] duration metric: took 22.6438649s to StartCluster
	I0507 18:20:29.945702   11760 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:20:29.945822   11760 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:20:29.946955   11760 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:20:29.948152   11760 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.129.80 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:20:29.950974   11760 out.go:177] * Verifying Kubernetes components...
	I0507 18:20:29.948152   11760 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0507 18:20:29.948152   11760 config.go:182] Loaded profile config "functional-527400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:20:29.953953   11760 addons.go:69] Setting storage-provisioner=true in profile "functional-527400"
	I0507 18:20:29.953953   11760 addons.go:69] Setting default-storageclass=true in profile "functional-527400"
	I0507 18:20:29.953953   11760 addons.go:234] Setting addon storage-provisioner=true in "functional-527400"
	W0507 18:20:29.953953   11760 addons.go:243] addon storage-provisioner should already be in state true
	I0507 18:20:29.953953   11760 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "functional-527400"
	I0507 18:20:29.953953   11760 host.go:66] Checking if "functional-527400" exists ...
	I0507 18:20:29.955250   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:20:29.958666   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:20:29.968099   11760 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:20:30.236023   11760 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 18:20:30.262771   11760 node_ready.go:35] waiting up to 6m0s for node "functional-527400" to be "Ready" ...
	I0507 18:20:30.262771   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:30.262771   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:30.262771   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:30.262771   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:30.266493   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:30.266958   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:30.266958   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:30.266958   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:30.266958   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:30.266958   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:30 GMT
	I0507 18:20:30.266958   11760 round_trippers.go:580]     Audit-Id: b4304c46-7076-4ba6-9861-94b761a91d6c
	I0507 18:20:30.266958   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:30.267631   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:30.268572   11760 node_ready.go:49] node "functional-527400" has status "Ready":"True"
	I0507 18:20:30.268572   11760 node_ready.go:38] duration metric: took 5.801ms for node "functional-527400" to be "Ready" ...
	I0507 18:20:30.268572   11760 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 18:20:30.268745   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods
	I0507 18:20:30.268745   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:30.268811   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:30.268811   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:30.276153   11760 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:20:30.276153   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:30.276153   11760 round_trippers.go:580]     Audit-Id: 30ed7925-2000-4755-bfdf-8692f8c30de6
	I0507 18:20:30.276153   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:30.276153   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:30.276153   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:30.276153   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:30.276153   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:30 GMT
	I0507 18:20:30.277155   11760 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"576"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"564","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51210 chars]
	I0507 18:20:30.279919   11760 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-6b5v9" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:30.280077   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-6b5v9
	I0507 18:20:30.280077   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:30.280077   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:30.280077   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:30.282718   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:30.282718   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:30.282718   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:30.282718   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:30 GMT
	I0507 18:20:30.282718   11760 round_trippers.go:580]     Audit-Id: f6f26e5b-c309-4d9f-a121-b9e33ade900a
	I0507 18:20:30.282718   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:30.282718   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:30.282718   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:30.282718   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"564","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6586 chars]
	I0507 18:20:30.305431   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:30.305431   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:30.305431   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:30.305637   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:30.308826   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:30.308916   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:30.308916   11760 round_trippers.go:580]     Audit-Id: 11158865-14a1-418d-86f8-8c3b87fb8972
	I0507 18:20:30.308916   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:30.308916   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:30.308916   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:30.308916   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:30.308916   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:30 GMT
	I0507 18:20:30.309389   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:30.309808   11760 pod_ready.go:92] pod "coredns-7db6d8ff4d-6b5v9" in "kube-system" namespace has status "Ready":"True"
	I0507 18:20:30.309808   11760 pod_ready.go:81] duration metric: took 29.8877ms for pod "coredns-7db6d8ff4d-6b5v9" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:30.309808   11760 pod_ready.go:78] waiting up to 6m0s for pod "etcd-functional-527400" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:30.495832   11760 request.go:629] Waited for 186.0113ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/etcd-functional-527400
	I0507 18:20:30.496146   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/etcd-functional-527400
	I0507 18:20:30.496146   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:30.496146   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:30.496146   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:30.500061   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:30.500836   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:30.500836   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:30.500836   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:30.500836   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:30.500836   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:30 GMT
	I0507 18:20:30.500901   11760 round_trippers.go:580]     Audit-Id: aa8b886c-c010-4553-95b4-6f031acf36e9
	I0507 18:20:30.500901   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:30.500977   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-functional-527400","namespace":"kube-system","uid":"9abcd377-8ba5-4666-afc7-fdb3f2a84083","resourceVersion":"563","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.129.80:2379","kubernetes.io/config.hash":"6a8912a658474f6abf27cdfaacc14627","kubernetes.io/config.mirror":"6a8912a658474f6abf27cdfaacc14627","kubernetes.io/config.seen":"2024-05-07T18:18:18.603000853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise
-client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/conf [truncated 6621 chars]
	I0507 18:20:30.702961   11760 request.go:629] Waited for 200.8811ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:30.702961   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:30.702961   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:30.702961   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:30.702961   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:30.706181   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:30.706579   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:30.706579   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:30.706579   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:30 GMT
	I0507 18:20:30.706579   11760 round_trippers.go:580]     Audit-Id: 85160a24-c960-4472-85d8-c16cae3a0903
	I0507 18:20:30.706579   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:30.706664   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:30.706664   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:30.706664   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:30.707295   11760 pod_ready.go:92] pod "etcd-functional-527400" in "kube-system" namespace has status "Ready":"True"
	I0507 18:20:30.707295   11760 pod_ready.go:81] duration metric: took 397.4595ms for pod "etcd-functional-527400" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:30.707295   11760 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-functional-527400" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:30.895876   11760 request.go:629] Waited for 188.3958ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-527400
	I0507 18:20:30.896160   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-apiserver-functional-527400
	I0507 18:20:30.896160   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:30.896160   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:30.896160   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:30.900097   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:30.900968   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:30.900968   11760 round_trippers.go:580]     Audit-Id: 6ab13391-b999-441c-8330-1c75e4288a00
	I0507 18:20:30.900968   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:30.900968   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:30.900968   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:30.900968   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:30.900968   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:31 GMT
	I0507 18:20:30.901360   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-functional-527400","namespace":"kube-system","uid":"c4a7dba1-d1fe-49d4-bb75-72415782e9c2","resourceVersion":"576","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.129.80:8441","kubernetes.io/config.hash":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.mirror":"7e995d8ed0f8760a3d3056ba8a241ac8","kubernetes.io/config.seen":"2024-05-07T18:18:18.602995453Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.k
ubernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernete [truncated 8154 chars]
	I0507 18:20:31.099929   11760 request.go:629] Waited for 197.8667ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:31.099929   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:31.099929   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:31.099929   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:31.099929   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:31.103519   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:31.103519   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:31.103519   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:31.103519   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:31.103519   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:31.103519   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:31 GMT
	I0507 18:20:31.104520   11760 round_trippers.go:580]     Audit-Id: 2d2e5e7a-2cb6-49d7-8f12-f9e33f1c4d71
	I0507 18:20:31.104520   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:31.104520   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:31.105131   11760 pod_ready.go:92] pod "kube-apiserver-functional-527400" in "kube-system" namespace has status "Ready":"True"
	I0507 18:20:31.105131   11760 pod_ready.go:81] duration metric: took 397.8089ms for pod "kube-apiserver-functional-527400" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:31.105131   11760 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-functional-527400" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:31.305889   11760 request.go:629] Waited for 200.5383ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-527400
	I0507 18:20:31.306065   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-controller-manager-functional-527400
	I0507 18:20:31.306065   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:31.306065   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:31.306065   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:31.309655   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:31.309655   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:31.309655   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:31 GMT
	I0507 18:20:31.309655   11760 round_trippers.go:580]     Audit-Id: 0ff11e38-65f3-4c19-bf65-d086657dc4ed
	I0507 18:20:31.309655   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:31.310109   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:31.310109   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:31.310109   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:31.310409   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-functional-527400","namespace":"kube-system","uid":"3a4e6083-ef54-4e5f-b89d-51823bc2999b","resourceVersion":"571","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f58ad332f34d29646b02ab6d1afdba59","kubernetes.io/config.mirror":"f58ad332f34d29646b02ab6d1afdba59","kubernetes.io/config.seen":"2024-05-07T18:18:18.602998853Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes
.io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{"." [truncated 7698 chars]
	I0507 18:20:31.495601   11760 request.go:629] Waited for 184.4206ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:31.495601   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:31.495601   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:31.495838   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:31.495838   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:31.499473   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:31.499473   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:31.499473   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:31.499473   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:31.499473   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:31.499473   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:31 GMT
	I0507 18:20:31.499473   11760 round_trippers.go:580]     Audit-Id: 2185f581-778c-4668-8873-45aa9314a1d6
	I0507 18:20:31.499473   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:31.500181   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:31.500181   11760 pod_ready.go:92] pod "kube-controller-manager-functional-527400" in "kube-system" namespace has status "Ready":"True"
	I0507 18:20:31.500181   11760 pod_ready.go:81] duration metric: took 395.022ms for pod "kube-controller-manager-functional-527400" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:31.500181   11760 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9lf2q" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:31.701420   11760 request.go:629] Waited for 200.4835ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-proxy-9lf2q
	I0507 18:20:31.701511   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-proxy-9lf2q
	I0507 18:20:31.701511   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:31.701511   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:31.701511   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:31.705516   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:31.705516   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:31.705516   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:31.705516   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:31.705516   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:31.705516   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:31.705516   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:31 GMT
	I0507 18:20:31.705516   11760 round_trippers.go:580]     Audit-Id: 3a131a2e-1116-4500-bd13-ea6b8488bad5
	I0507 18:20:31.705516   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9lf2q","generateName":"kube-proxy-","namespace":"kube-system","uid":"728dcb3a-0eb1-45b5-92a6-35c6819af3bf","resourceVersion":"503","creationTimestamp":"2024-05-07T18:18:32Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"5950ebc1-1c4a-485c-9979-957e4be7eecd","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:32Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"5950ebc1-1c4a-485c-9979-957e4be7eecd\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 6035 chars]
	I0507 18:20:31.907957   11760 request.go:629] Waited for 201.7186ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:31.907957   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:31.907957   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:31.907957   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:31.907957   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:31.911664   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:31.911664   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:31.911664   11760 round_trippers.go:580]     Audit-Id: ff14f3fb-5974-41e5-9956-072fae4567c4
	I0507 18:20:31.911664   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:31.911664   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:31.911664   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:31.911664   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:31.911664   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:32 GMT
	I0507 18:20:31.912272   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:31.912726   11760 pod_ready.go:92] pod "kube-proxy-9lf2q" in "kube-system" namespace has status "Ready":"True"
	I0507 18:20:31.912804   11760 pod_ready.go:81] duration metric: took 412.5166ms for pod "kube-proxy-9lf2q" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:31.912804   11760 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-functional-527400" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:31.944001   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:20:31.944001   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:20:31.944971   11760 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:20:31.946059   11760 kapi.go:59] client config for functional-527400: &rest.Config{Host:"https://172.19.129.80:8441", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-527400\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\functional-527400\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 18:20:31.946671   11760 addons.go:234] Setting addon default-storageclass=true in "functional-527400"
	W0507 18:20:31.946671   11760 addons.go:243] addon default-storageclass should already be in state true
	I0507 18:20:31.946671   11760 host.go:66] Checking if "functional-527400" exists ...
	I0507 18:20:31.948222   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:20:31.965517   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:20:31.965517   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:20:31.971143   11760 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 18:20:31.973552   11760 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 18:20:31.973552   11760 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0507 18:20:31.973552   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:20:32.096392   11760 request.go:629] Waited for 183.4684ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-527400
	I0507 18:20:32.096624   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods/kube-scheduler-functional-527400
	I0507 18:20:32.096741   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:32.096776   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:32.096776   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:32.102636   11760 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:20:32.102636   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:32.102636   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:32.102636   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:32.102636   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:32 GMT
	I0507 18:20:32.102636   11760 round_trippers.go:580]     Audit-Id: b5feb904-e0d0-450a-9882-c3b23be83b81
	I0507 18:20:32.102636   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:32.102636   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:32.103175   11760 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-functional-527400","namespace":"kube-system","uid":"12cb2956-7c05-444a-ae86-a409f3c4f7b5","resourceVersion":"561","creationTimestamp":"2024-05-07T18:18:18Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"1d37a6ceba39f554ffabe6fc019d79c9","kubernetes.io/config.mirror":"1d37a6ceba39f554ffabe6fc019d79c9","kubernetes.io/config.seen":"2024-05-07T18:18:18.602999953Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{
},"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component": [truncated 5454 chars]
	I0507 18:20:32.303352   11760 request.go:629] Waited for 199.4786ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:32.303617   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes/functional-527400
	I0507 18:20:32.303681   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:32.303681   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:32.303743   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:32.307097   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:32.307097   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:32.307503   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:32 GMT
	I0507 18:20:32.307503   11760 round_trippers.go:580]     Audit-Id: ef5a09d2-9459-40c6-9995-af39de538250
	I0507 18:20:32.307503   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:32.307503   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:32.307503   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:32.307503   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:32.307802   11760 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","
apiVersion":"v1","time":"2024-05-07T18:18:15Z","fieldsType":"FieldsV1", [truncated 4787 chars]
	I0507 18:20:32.308786   11760 pod_ready.go:92] pod "kube-scheduler-functional-527400" in "kube-system" namespace has status "Ready":"True"
	I0507 18:20:32.308786   11760 pod_ready.go:81] duration metric: took 395.9556ms for pod "kube-scheduler-functional-527400" in "kube-system" namespace to be "Ready" ...
	I0507 18:20:32.308786   11760 pod_ready.go:38] duration metric: took 2.0399916s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 18:20:32.308786   11760 api_server.go:52] waiting for apiserver process to appear ...
	I0507 18:20:32.322111   11760 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 18:20:32.351623   11760 command_runner.go:130] > 5610
	I0507 18:20:32.351623   11760 api_server.go:72] duration metric: took 2.4033056s to wait for apiserver process to appear ...
	I0507 18:20:32.351623   11760 api_server.go:88] waiting for apiserver healthz status ...
	I0507 18:20:32.351623   11760 api_server.go:253] Checking apiserver healthz at https://172.19.129.80:8441/healthz ...
	I0507 18:20:32.362176   11760 api_server.go:279] https://172.19.129.80:8441/healthz returned 200:
	ok
	I0507 18:20:32.362176   11760 round_trippers.go:463] GET https://172.19.129.80:8441/version
	I0507 18:20:32.362176   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:32.362351   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:32.362351   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:32.362881   11760 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0507 18:20:32.363877   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:32.363877   11760 round_trippers.go:580]     Audit-Id: a73c4402-ba7a-4a82-8b70-f734c5e83052
	I0507 18:20:32.363945   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:32.363945   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:32.363945   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:32.363994   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:32.363994   11760 round_trippers.go:580]     Content-Length: 263
	I0507 18:20:32.363994   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:32 GMT
	I0507 18:20:32.364054   11760 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0507 18:20:32.364155   11760 api_server.go:141] control plane version: v1.30.0
	I0507 18:20:32.364155   11760 api_server.go:131] duration metric: took 12.5303ms to wait for apiserver health ...
	I0507 18:20:32.364155   11760 system_pods.go:43] waiting for kube-system pods to appear ...
	I0507 18:20:32.508307   11760 request.go:629] Waited for 143.9833ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods
	I0507 18:20:32.508307   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods
	I0507 18:20:32.508307   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:32.508307   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:32.508307   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:32.512877   11760 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:20:32.512877   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:32.513412   11760 round_trippers.go:580]     Audit-Id: 6c54975e-f6da-4b2b-b9dd-85a820efde06
	I0507 18:20:32.513412   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:32.513412   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:32.513412   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:32.513412   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:32.513479   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:32 GMT
	I0507 18:20:32.514711   11760 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"576"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"564","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51210 chars]
	I0507 18:20:32.516952   11760 system_pods.go:59] 7 kube-system pods found
	I0507 18:20:32.517024   11760 system_pods.go:61] "coredns-7db6d8ff4d-6b5v9" [4925e3cc-31d5-477c-9966-4d533ba939a8] Running
	I0507 18:20:32.517024   11760 system_pods.go:61] "etcd-functional-527400" [9abcd377-8ba5-4666-afc7-fdb3f2a84083] Running
	I0507 18:20:32.517024   11760 system_pods.go:61] "kube-apiserver-functional-527400" [c4a7dba1-d1fe-49d4-bb75-72415782e9c2] Running
	I0507 18:20:32.517024   11760 system_pods.go:61] "kube-controller-manager-functional-527400" [3a4e6083-ef54-4e5f-b89d-51823bc2999b] Running
	I0507 18:20:32.517024   11760 system_pods.go:61] "kube-proxy-9lf2q" [728dcb3a-0eb1-45b5-92a6-35c6819af3bf] Running
	I0507 18:20:32.517024   11760 system_pods.go:61] "kube-scheduler-functional-527400" [12cb2956-7c05-444a-ae86-a409f3c4f7b5] Running
	I0507 18:20:32.517024   11760 system_pods.go:61] "storage-provisioner" [514d12a0-9694-41b7-9ed5-5ae68ad0a037] Running
	I0507 18:20:32.517099   11760 system_pods.go:74] duration metric: took 152.7782ms to wait for pod list to return data ...
	I0507 18:20:32.517099   11760 default_sa.go:34] waiting for default service account to be created ...
	I0507 18:20:32.700165   11760 request.go:629] Waited for 182.9851ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.129.80:8441/api/v1/namespaces/default/serviceaccounts
	I0507 18:20:32.700317   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/default/serviceaccounts
	I0507 18:20:32.700317   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:32.700317   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:32.700317   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:32.703910   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:32.704795   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:32.704795   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:32.704795   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:32.704795   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:32.704871   11760 round_trippers.go:580]     Content-Length: 261
	I0507 18:20:32.704871   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:32 GMT
	I0507 18:20:32.704871   11760 round_trippers.go:580]     Audit-Id: 4d451347-ef56-4b8c-be16-1997e3d13654
	I0507 18:20:32.704871   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:32.704954   11760 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"576"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"32ce8136-29ee-46b9-95b7-fe6b11d177b0","resourceVersion":"345","creationTimestamp":"2024-05-07T18:18:32Z"}}]}
	I0507 18:20:32.705270   11760 default_sa.go:45] found service account: "default"
	I0507 18:20:32.705270   11760 default_sa.go:55] duration metric: took 188.158ms for default service account to be created ...
	I0507 18:20:32.705270   11760 system_pods.go:116] waiting for k8s-apps to be running ...
	I0507 18:20:32.908001   11760 request.go:629] Waited for 202.5727ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods
	I0507 18:20:32.908119   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/namespaces/kube-system/pods
	I0507 18:20:32.908119   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:32.908119   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:32.908228   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:32.914608   11760 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:20:32.915056   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:32.915259   11760 round_trippers.go:580]     Audit-Id: 06030b67-48e5-4e21-82e2-4989d97b7c30
	I0507 18:20:32.915259   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:32.915259   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:32.915259   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:32.915259   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:32.915259   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:33 GMT
	I0507 18:20:32.916034   11760 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"576"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-6b5v9","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"4925e3cc-31d5-477c-9966-4d533ba939a8","resourceVersion":"564","creationTimestamp":"2024-05-07T18:18:33Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"33493d02-30c0-46f7-b452-9489bb38d0ba","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T18:18:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"33493d02-30c0-46f7-b452-9489bb38d0ba\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 51210 chars]
	I0507 18:20:32.918603   11760 system_pods.go:86] 7 kube-system pods found
	I0507 18:20:32.918690   11760 system_pods.go:89] "coredns-7db6d8ff4d-6b5v9" [4925e3cc-31d5-477c-9966-4d533ba939a8] Running
	I0507 18:20:32.918690   11760 system_pods.go:89] "etcd-functional-527400" [9abcd377-8ba5-4666-afc7-fdb3f2a84083] Running
	I0507 18:20:32.918690   11760 system_pods.go:89] "kube-apiserver-functional-527400" [c4a7dba1-d1fe-49d4-bb75-72415782e9c2] Running
	I0507 18:20:32.918690   11760 system_pods.go:89] "kube-controller-manager-functional-527400" [3a4e6083-ef54-4e5f-b89d-51823bc2999b] Running
	I0507 18:20:32.918690   11760 system_pods.go:89] "kube-proxy-9lf2q" [728dcb3a-0eb1-45b5-92a6-35c6819af3bf] Running
	I0507 18:20:32.918774   11760 system_pods.go:89] "kube-scheduler-functional-527400" [12cb2956-7c05-444a-ae86-a409f3c4f7b5] Running
	I0507 18:20:32.918774   11760 system_pods.go:89] "storage-provisioner" [514d12a0-9694-41b7-9ed5-5ae68ad0a037] Running
	I0507 18:20:32.918774   11760 system_pods.go:126] duration metric: took 213.4897ms to wait for k8s-apps to be running ...
	I0507 18:20:32.918774   11760 system_svc.go:44] waiting for kubelet service to be running ....
	I0507 18:20:32.927557   11760 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 18:20:32.951971   11760 system_svc.go:56] duration metric: took 33.1944ms WaitForService to wait for kubelet
	I0507 18:20:32.951971   11760 kubeadm.go:576] duration metric: took 3.0036118s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 18:20:32.951971   11760 node_conditions.go:102] verifying NodePressure condition ...
	I0507 18:20:33.099729   11760 request.go:629] Waited for 147.748ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.129.80:8441/api/v1/nodes
	I0507 18:20:33.099729   11760 round_trippers.go:463] GET https://172.19.129.80:8441/api/v1/nodes
	I0507 18:20:33.099729   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:33.099729   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:33.100007   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:33.104624   11760 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:20:33.104900   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:33.104945   11760 round_trippers.go:580]     Audit-Id: 2aaa1e53-5f4a-49e1-972d-e3b68129b1ee
	I0507 18:20:33.104945   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:33.104945   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:33.104945   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:33.104945   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:33.104945   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:33 GMT
	I0507 18:20:33.104945   11760 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"576"},"items":[{"metadata":{"name":"functional-527400","uid":"1f8009c3-5065-4e6e-94e6-3fbe2fdf4d26","resourceVersion":"493","creationTimestamp":"2024-05-07T18:18:15Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"functional-527400","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"functional-527400","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T18_18_19_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedF
ields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","ti [truncated 4840 chars]
	I0507 18:20:33.105736   11760 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 18:20:33.105736   11760 node_conditions.go:123] node cpu capacity is 2
	I0507 18:20:33.105736   11760 node_conditions.go:105] duration metric: took 153.755ms to run NodePressure ...
	I0507 18:20:33.105736   11760 start.go:240] waiting for startup goroutines ...
	I0507 18:20:33.999654   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:20:33.999761   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:20:33.999761   11760 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0507 18:20:33.999761   11760 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0507 18:20:33.999761   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
	I0507 18:20:34.004079   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:20:34.004079   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:20:34.004079   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:20:36.010109   11760 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:20:36.010109   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:20:36.011165   11760 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
	I0507 18:20:36.361923   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:20:36.361923   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:20:36.362345   11760 sshutil.go:53] new ssh client: &{IP:172.19.129.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-527400\id_rsa Username:docker}
	I0507 18:20:36.500098   11760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 18:20:37.214642   11760 command_runner.go:130] > serviceaccount/storage-provisioner unchanged
	I0507 18:20:37.214642   11760 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner unchanged
	I0507 18:20:37.214642   11760 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0507 18:20:37.214642   11760 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner unchanged
	I0507 18:20:37.214642   11760 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath unchanged
	I0507 18:20:37.214642   11760 command_runner.go:130] > pod/storage-provisioner configured
	I0507 18:20:38.349136   11760 main.go:141] libmachine: [stdout =====>] : 172.19.129.80
	
	I0507 18:20:38.349136   11760 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:20:38.349136   11760 sshutil.go:53] new ssh client: &{IP:172.19.129.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-527400\id_rsa Username:docker}
	I0507 18:20:38.469069   11760 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0507 18:20:38.613876   11760 command_runner.go:130] > storageclass.storage.k8s.io/standard unchanged
	I0507 18:20:38.614256   11760 round_trippers.go:463] GET https://172.19.129.80:8441/apis/storage.k8s.io/v1/storageclasses
	I0507 18:20:38.614256   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:38.614324   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:38.614363   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:38.617180   11760 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:20:38.617655   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:38.617690   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:38.617690   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:38.617690   11760 round_trippers.go:580]     Content-Length: 1273
	I0507 18:20:38.617690   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:38 GMT
	I0507 18:20:38.617690   11760 round_trippers.go:580]     Audit-Id: d3050d0c-820a-4f7c-8b84-677bffe1ed45
	I0507 18:20:38.617690   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:38.617690   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:38.617796   11760 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"583"},"items":[{"metadata":{"name":"standard","uid":"752e5820-e34f-4b6a-9eb5-b5252e7644dc","resourceVersion":"432","creationTimestamp":"2024-05-07T18:18:41Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-07T18:18:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0507 18:20:38.618603   11760 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"752e5820-e34f-4b6a-9eb5-b5252e7644dc","resourceVersion":"432","creationTimestamp":"2024-05-07T18:18:41Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-07T18:18:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0507 18:20:38.618691   11760 round_trippers.go:463] PUT https://172.19.129.80:8441/apis/storage.k8s.io/v1/storageclasses/standard
	I0507 18:20:38.618691   11760 round_trippers.go:469] Request Headers:
	I0507 18:20:38.618691   11760 round_trippers.go:473]     Content-Type: application/json
	I0507 18:20:38.618748   11760 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:20:38.618748   11760 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:20:38.622448   11760 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:20:38.622448   11760 round_trippers.go:577] Response Headers:
	I0507 18:20:38.622448   11760 round_trippers.go:580]     Audit-Id: d42279aa-f6e6-4706-be05-9846daab05d6
	I0507 18:20:38.622448   11760 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 18:20:38.622448   11760 round_trippers.go:580]     Content-Type: application/json
	I0507 18:20:38.622448   11760 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: df9d73ca-fbd3-4b1d-993b-1462852c9660
	I0507 18:20:38.622448   11760 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 675d16bf-2e41-4425-8765-4fd355caa152
	I0507 18:20:38.622448   11760 round_trippers.go:580]     Content-Length: 1220
	I0507 18:20:38.622448   11760 round_trippers.go:580]     Date: Tue, 07 May 2024 18:20:38 GMT
	I0507 18:20:38.622448   11760 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"752e5820-e34f-4b6a-9eb5-b5252e7644dc","resourceVersion":"432","creationTimestamp":"2024-05-07T18:18:41Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-07T18:18:41Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0507 18:20:38.626397   11760 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0507 18:20:38.629770   11760 addons.go:505] duration metric: took 8.6810207s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0507 18:20:38.629770   11760 start.go:245] waiting for cluster config update ...
	I0507 18:20:38.629770   11760 start.go:254] writing updated cluster config ...
	I0507 18:20:38.637817   11760 ssh_runner.go:195] Run: rm -f paused
	I0507 18:20:38.761897   11760 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0507 18:20:38.764619   11760 out.go:177] * Done! kubectl is now configured to use "functional-527400" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 07 18:20:12 functional-527400 dockerd[3895]: time="2024-05-07T18:20:12.263312956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:20:12 functional-527400 dockerd[3895]: time="2024-05-07T18:20:12.263384562Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:20:15 functional-527400 cri-dockerd[4119]: time="2024-05-07T18:20:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	May 07 18:20:16 functional-527400 dockerd[3895]: time="2024-05-07T18:20:16.291349637Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 18:20:16 functional-527400 dockerd[3895]: time="2024-05-07T18:20:16.291520951Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 18:20:16 functional-527400 dockerd[3895]: time="2024-05-07T18:20:16.291542952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:20:16 functional-527400 dockerd[3895]: time="2024-05-07T18:20:16.291796873Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:20:16 functional-527400 dockerd[3895]: time="2024-05-07T18:20:16.371473561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 18:20:16 functional-527400 dockerd[3895]: time="2024-05-07T18:20:16.372776265Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 18:20:16 functional-527400 dockerd[3895]: time="2024-05-07T18:20:16.373056088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:20:16 functional-527400 dockerd[3895]: time="2024-05-07T18:20:16.374862933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:20:16 functional-527400 dockerd[3895]: time="2024-05-07T18:20:16.407488648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 18:20:16 functional-527400 dockerd[3895]: time="2024-05-07T18:20:16.407600657Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 18:20:16 functional-527400 dockerd[3895]: time="2024-05-07T18:20:16.407618859Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:20:16 functional-527400 dockerd[3895]: time="2024-05-07T18:20:16.407710466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:20:16 functional-527400 cri-dockerd[4119]: time="2024-05-07T18:20:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f19a90023a9c4003de25c75dbf12416c4e6ce4448340e76457e08242a7a65d66/resolv.conf as [nameserver 172.19.128.1]"
	May 07 18:20:16 functional-527400 cri-dockerd[4119]: time="2024-05-07T18:20:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89bd4ca484f0a473b28a4066f892647535865b016ee7699e90bfb9569d4151eb/resolv.conf as [nameserver 172.19.128.1]"
	May 07 18:20:16 functional-527400 dockerd[3895]: time="2024-05-07T18:20:16.818556106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 18:20:16 functional-527400 dockerd[3895]: time="2024-05-07T18:20:16.819545186Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 18:20:16 functional-527400 dockerd[3895]: time="2024-05-07T18:20:16.819718800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:20:16 functional-527400 dockerd[3895]: time="2024-05-07T18:20:16.820087729Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:20:17 functional-527400 dockerd[3895]: time="2024-05-07T18:20:17.163014361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 18:20:17 functional-527400 dockerd[3895]: time="2024-05-07T18:20:17.163372888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 18:20:17 functional-527400 dockerd[3895]: time="2024-05-07T18:20:17.163392190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:20:17 functional-527400 dockerd[3895]: time="2024-05-07T18:20:17.163520799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	86b4fed99448d       cbb01a7bd410d       About a minute ago   Running             coredns                   2                   89bd4ca484f0a       coredns-7db6d8ff4d-6b5v9
	671ca7df83c23       6e38f40d628db       About a minute ago   Running             storage-provisioner       2                   f19a90023a9c4       storage-provisioner
	28ce05187a5f1       a0bf559e280cf       About a minute ago   Running             kube-proxy                2                   ea783d7077180       kube-proxy-9lf2q
	e6978e4e58e25       3861cfcd7c04c       2 minutes ago        Running             etcd                      2                   17d9d2c7594a9       etcd-functional-527400
	cd98357f2eca0       259c8277fcbbc       2 minutes ago        Running             kube-scheduler            2                   61a36242e21fa       kube-scheduler-functional-527400
	06c713a398a26       c7aad43836fa5       2 minutes ago        Running             kube-controller-manager   2                   2eaa63fca7b00       kube-controller-manager-functional-527400
	25730830e78bd       c42f13656d0b2       2 minutes ago        Running             kube-apiserver            2                   27f5a89c17d40       kube-apiserver-functional-527400
	86b659da9cca9       cbb01a7bd410d       2 minutes ago        Created             coredns                   1                   98523c6db3963       coredns-7db6d8ff4d-6b5v9
	b108c264da689       c42f13656d0b2       2 minutes ago        Created             kube-apiserver            1                   0fae4a4499885       kube-apiserver-functional-527400
	06bb16ebd80e7       3861cfcd7c04c       2 minutes ago        Created             etcd                      1                   0972535768fae       etcd-functional-527400
	805faa80aeb92       c7aad43836fa5       2 minutes ago        Created             kube-controller-manager   1                   a8432805a72fe       kube-controller-manager-functional-527400
	7c0d9498c652d       259c8277fcbbc       2 minutes ago        Created             kube-scheduler            1                   fb89b64b69c2c       kube-scheduler-functional-527400
	835333bb04e92       a0bf559e280cf       2 minutes ago        Exited              kube-proxy                1                   8a54d5a8faae6       kube-proxy-9lf2q
	e12da0342bc8c       6e38f40d628db       2 minutes ago        Exited              storage-provisioner       1                   870e72eb89269       storage-provisioner
	
	
	==> coredns [86b4fed99448] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a3820eb745a9a768a035bf81145ae0754aeb40457ffd5109db8c64dac842ada6c2edf6f9e6a410714e0f5cbc9cd90cb925a2fb37599adf58a40dc1bc5fa339b9
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:41162 - 26586 "HINFO IN 5463578991218974398.225286806236076514. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.032944807s
	
	
	==> coredns [86b659da9cca] <==
	
	
	==> describe nodes <==
	Name:               functional-527400
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-527400
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	                    minikube.k8s.io/name=functional-527400
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_07T18_18_19_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 May 2024 18:18:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-527400
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 May 2024 18:22:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 May 2024 18:21:46 +0000   Tue, 07 May 2024 18:18:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 May 2024 18:21:46 +0000   Tue, 07 May 2024 18:18:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 May 2024 18:21:46 +0000   Tue, 07 May 2024 18:18:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 May 2024 18:21:46 +0000   Tue, 07 May 2024 18:18:21 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.129.80
	  Hostname:    functional-527400
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912872Ki
	  pods:               110
	System Info:
	  Machine ID:                 45345fd1a317479b9ee9242a5f31f00c
	  System UUID:                cbbc6ef1-1732-5846-af5e-ae9c5c0c4f25
	  Boot ID:                    57714b17-9a40-4348-950a-088907031ae3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-6b5v9                     100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (1%!)(MISSING)        170Mi (4%!)(MISSING)     3m40s
	  kube-system                 etcd-functional-527400                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (2%!)(MISSING)       0 (0%!)(MISSING)         3m55s
	  kube-system                 kube-apiserver-functional-527400             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m55s
	  kube-system                 kube-controller-manager-functional-527400    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m55s
	  kube-system                 kube-proxy-9lf2q                             0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m41s
	  kube-system                 kube-scheduler-functional-527400             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m55s
	  kube-system                 storage-provisioner                          0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         3m33s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%!)(MISSING)  0 (0%!)(MISSING)
	  memory             170Mi (4%!)(MISSING)  170Mi (4%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)      0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)      0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 3m39s                kube-proxy       
	  Normal  Starting                 116s                 kube-proxy       
	  Normal  NodeHasSufficientPID     4m2s (x7 over 4m2s)  kubelet          Node functional-527400 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    4m2s (x8 over 4m2s)  kubelet          Node functional-527400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  4m2s (x8 over 4m2s)  kubelet          Node functional-527400 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  4m2s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 3m55s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m55s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m55s                kubelet          Node functional-527400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m55s                kubelet          Node functional-527400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m55s                kubelet          Node functional-527400 status is now: NodeHasSufficientPID
	  Normal  NodeReady                3m52s                kubelet          Node functional-527400 status is now: NodeReady
	  Normal  RegisteredNode           3m41s                node-controller  Node functional-527400 event: Registered Node functional-527400 in Controller
	  Normal  Starting                 2m3s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s (x8 over 2m3s)  kubelet          Node functional-527400 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s (x8 over 2m3s)  kubelet          Node functional-527400 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s (x7 over 2m3s)  kubelet          Node functional-527400 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           105s                 node-controller  Node functional-527400 event: Registered Node functional-527400 in Controller
	
	
	==> dmesg <==
	[May 7 18:18] kauditd_printk_skb: 12 callbacks suppressed
	[  +0.603941] systemd-fstab-generator[1525]: Ignoring "noauto" option for root device
	[  +5.778197] systemd-fstab-generator[1724]: Ignoring "noauto" option for root device
	[  +0.095075] kauditd_printk_skb: 51 callbacks suppressed
	[  +7.511940] systemd-fstab-generator[2126]: Ignoring "noauto" option for root device
	[  +0.114578] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.811156] systemd-fstab-generator[2369]: Ignoring "noauto" option for root device
	[  +0.194948] kauditd_printk_skb: 12 callbacks suppressed
	[  +7.229170] kauditd_printk_skb: 71 callbacks suppressed
	[May 7 18:19] systemd-fstab-generator[3409]: Ignoring "noauto" option for root device
	[  +0.595841] systemd-fstab-generator[3445]: Ignoring "noauto" option for root device
	[  +0.221837] systemd-fstab-generator[3457]: Ignoring "noauto" option for root device
	[  +0.274524] systemd-fstab-generator[3471]: Ignoring "noauto" option for root device
	[  +5.256529] kauditd_printk_skb: 89 callbacks suppressed
	[May 7 18:20] systemd-fstab-generator[4072]: Ignoring "noauto" option for root device
	[  +0.206181] systemd-fstab-generator[4084]: Ignoring "noauto" option for root device
	[  +0.207934] systemd-fstab-generator[4096]: Ignoring "noauto" option for root device
	[  +0.273195] systemd-fstab-generator[4111]: Ignoring "noauto" option for root device
	[  +0.817995] systemd-fstab-generator[4261]: Ignoring "noauto" option for root device
	[  +0.735330] kauditd_printk_skb: 152 callbacks suppressed
	[  +3.709379] systemd-fstab-generator[5282]: Ignoring "noauto" option for root device
	[  +1.322255] kauditd_printk_skb: 81 callbacks suppressed
	[  +5.001465] kauditd_printk_skb: 38 callbacks suppressed
	[ +11.562500] kauditd_printk_skb: 6 callbacks suppressed
	[  +1.877295] systemd-fstab-generator[6187]: Ignoring "noauto" option for root device
	
	
	==> etcd [06bb16ebd80e] <==
	
	
	==> etcd [e6978e4e58e2] <==
	{"level":"info","ts":"2024-05-07T18:20:12.744468Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-07T18:20:12.744595Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-07T18:20:12.751447Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-07T18:20:12.752445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1e33d38edfa7ae6 switched to configuration voters=(16276920792967183078)"}
	{"level":"info","ts":"2024-05-07T18:20:12.758451Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d590e6324cd31e49","local-member-id":"e1e33d38edfa7ae6","added-peer-id":"e1e33d38edfa7ae6","added-peer-peer-urls":["https://172.19.129.80:2380"]}
	{"level":"info","ts":"2024-05-07T18:20:12.758699Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d590e6324cd31e49","local-member-id":"e1e33d38edfa7ae6","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-07T18:20:12.758847Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-07T18:20:12.752543Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.129.80:2380"}
	{"level":"info","ts":"2024-05-07T18:20:12.783276Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.129.80:2380"}
	{"level":"info","ts":"2024-05-07T18:20:12.758506Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-07T18:20:12.758466Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"e1e33d38edfa7ae6","initial-advertise-peer-urls":["https://172.19.129.80:2380"],"listen-peer-urls":["https://172.19.129.80:2380"],"advertise-client-urls":["https://172.19.129.80:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.129.80:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-07T18:20:13.661474Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1e33d38edfa7ae6 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-07T18:20:13.6616Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1e33d38edfa7ae6 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-07T18:20:13.661933Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1e33d38edfa7ae6 received MsgPreVoteResp from e1e33d38edfa7ae6 at term 2"}
	{"level":"info","ts":"2024-05-07T18:20:13.662047Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1e33d38edfa7ae6 became candidate at term 3"}
	{"level":"info","ts":"2024-05-07T18:20:13.662125Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1e33d38edfa7ae6 received MsgVoteResp from e1e33d38edfa7ae6 at term 3"}
	{"level":"info","ts":"2024-05-07T18:20:13.662284Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e1e33d38edfa7ae6 became leader at term 3"}
	{"level":"info","ts":"2024-05-07T18:20:13.662345Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e1e33d38edfa7ae6 elected leader e1e33d38edfa7ae6 at term 3"}
	{"level":"info","ts":"2024-05-07T18:20:13.675814Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"e1e33d38edfa7ae6","local-member-attributes":"{Name:functional-527400 ClientURLs:[https://172.19.129.80:2379]}","request-path":"/0/members/e1e33d38edfa7ae6/attributes","cluster-id":"d590e6324cd31e49","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-07T18:20:13.675915Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-07T18:20:13.67633Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-07T18:20:13.677174Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-07T18:20:13.676496Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-07T18:20:13.678374Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.129.80:2379"}
	{"level":"info","ts":"2024-05-07T18:20:13.680399Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 18:22:13 up 5 min,  0 users,  load average: 0.72, 0.94, 0.44
	Linux functional-527400 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [25730830e78b] <==
	I0507 18:20:15.258388       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0507 18:20:15.258500       1 shared_informer.go:320] Caches are synced for configmaps
	I0507 18:20:15.258883       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0507 18:20:15.259231       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0507 18:20:15.259419       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0507 18:20:15.259872       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0507 18:20:15.266668       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0507 18:20:15.268909       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0507 18:20:15.268938       1 policy_source.go:224] refreshing policies
	I0507 18:20:15.269394       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0507 18:20:15.271207       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0507 18:20:15.271376       1 aggregator.go:165] initial CRD sync complete...
	I0507 18:20:15.271539       1 autoregister_controller.go:141] Starting autoregister controller
	I0507 18:20:15.271621       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0507 18:20:15.271778       1 cache.go:39] Caches are synced for autoregister controller
	E0507 18:20:15.272586       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0507 18:20:15.277839       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0507 18:20:16.071403       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0507 18:20:17.334704       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0507 18:20:17.361832       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0507 18:20:17.445440       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0507 18:20:17.521541       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0507 18:20:17.550011       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0507 18:20:28.084312       1 controller.go:615] quota admission added evaluator for: endpoints
	I0507 18:20:28.383816       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-apiserver [b108c264da68] <==
	
	
	==> kube-controller-manager [06c713a398a2] <==
	I0507 18:20:28.059417       1 shared_informer.go:320] Caches are synced for ephemeral
	I0507 18:20:28.078117       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0507 18:20:28.079416       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0507 18:20:28.080659       1 shared_informer.go:320] Caches are synced for deployment
	I0507 18:20:28.081492       1 shared_informer.go:320] Caches are synced for PVC protection
	I0507 18:20:28.086289       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0507 18:20:28.088002       1 shared_informer.go:320] Caches are synced for TTL
	I0507 18:20:28.091732       1 shared_informer.go:320] Caches are synced for persistent volume
	I0507 18:20:28.093410       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0507 18:20:28.095872       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0507 18:20:28.101353       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0507 18:20:28.101675       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="114.908µs"
	I0507 18:20:28.105852       1 shared_informer.go:320] Caches are synced for cronjob
	I0507 18:20:28.106178       1 shared_informer.go:320] Caches are synced for HPA
	I0507 18:20:28.131073       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0507 18:20:28.131227       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0507 18:20:28.131313       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0507 18:20:28.131400       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0507 18:20:28.135189       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0507 18:20:28.276554       1 shared_informer.go:320] Caches are synced for resource quota
	I0507 18:20:28.284703       1 shared_informer.go:320] Caches are synced for resource quota
	I0507 18:20:28.286270       1 shared_informer.go:320] Caches are synced for disruption
	I0507 18:20:28.717280       1 shared_informer.go:320] Caches are synced for garbage collector
	I0507 18:20:28.780322       1 shared_informer.go:320] Caches are synced for garbage collector
	I0507 18:20:28.780362       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	
	
	==> kube-controller-manager [805faa80aeb9] <==
	
	
	==> kube-proxy [28ce05187a5f] <==
	I0507 18:20:16.492739       1 server_linux.go:69] "Using iptables proxy"
	I0507 18:20:16.521568       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.129.80"]
	I0507 18:20:16.613653       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0507 18:20:16.613682       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0507 18:20:16.613700       1 server_linux.go:165] "Using iptables Proxier"
	I0507 18:20:16.618528       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0507 18:20:16.618738       1 server.go:872] "Version info" version="v1.30.0"
	I0507 18:20:16.618765       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 18:20:16.620416       1 config.go:192] "Starting service config controller"
	I0507 18:20:16.620435       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0507 18:20:16.620462       1 config.go:101] "Starting endpoint slice config controller"
	I0507 18:20:16.620467       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0507 18:20:16.622491       1 config.go:319] "Starting node config controller"
	I0507 18:20:16.623603       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0507 18:20:16.720616       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0507 18:20:16.720676       1 shared_informer.go:320] Caches are synced for service config
	I0507 18:20:16.724178       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [835333bb04e9] <==
	I0507 18:20:07.990167       1 server_linux.go:69] "Using iptables proxy"
	E0507 18:20:07.995613       1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8441/api/v1/nodes/functional-527400\": dial tcp 172.19.129.80:8441: connect: connection refused"
	
	
	==> kube-scheduler [7c0d9498c652] <==
	
	
	==> kube-scheduler [cd98357f2eca] <==
	I0507 18:20:13.416280       1 serving.go:380] Generated self-signed cert in-memory
	W0507 18:20:15.129857       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0507 18:20:15.129978       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0507 18:20:15.130005       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0507 18:20:15.130023       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0507 18:20:15.196497       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0507 18:20:15.196949       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 18:20:15.198830       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0507 18:20:15.199012       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0507 18:20:15.199025       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0507 18:20:15.199041       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 18:20:15.299283       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 07 18:20:15 functional-527400 kubelet[5304]: E0507 18:20:15.604979    5304 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-functional-527400\" already exists" pod="kube-system/kube-apiserver-functional-527400"
	May 07 18:20:15 functional-527400 kubelet[5304]: E0507 18:20:15.605763    5304 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-functional-527400\" already exists" pod="kube-system/kube-scheduler-functional-527400"
	May 07 18:20:15 functional-527400 kubelet[5304]: I0507 18:20:15.740332    5304 apiserver.go:52] "Watching apiserver"
	May 07 18:20:15 functional-527400 kubelet[5304]: I0507 18:20:15.754736    5304 topology_manager.go:215] "Topology Admit Handler" podUID="728dcb3a-0eb1-45b5-92a6-35c6819af3bf" podNamespace="kube-system" podName="kube-proxy-9lf2q"
	May 07 18:20:15 functional-527400 kubelet[5304]: I0507 18:20:15.754976    5304 topology_manager.go:215] "Topology Admit Handler" podUID="4925e3cc-31d5-477c-9966-4d533ba939a8" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6b5v9"
	May 07 18:20:15 functional-527400 kubelet[5304]: I0507 18:20:15.755108    5304 topology_manager.go:215] "Topology Admit Handler" podUID="514d12a0-9694-41b7-9ed5-5ae68ad0a037" podNamespace="kube-system" podName="storage-provisioner"
	May 07 18:20:15 functional-527400 kubelet[5304]: I0507 18:20:15.760460    5304 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	May 07 18:20:15 functional-527400 kubelet[5304]: I0507 18:20:15.772728    5304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/728dcb3a-0eb1-45b5-92a6-35c6819af3bf-lib-modules\") pod \"kube-proxy-9lf2q\" (UID: \"728dcb3a-0eb1-45b5-92a6-35c6819af3bf\") " pod="kube-system/kube-proxy-9lf2q"
	May 07 18:20:15 functional-527400 kubelet[5304]: I0507 18:20:15.772770    5304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/728dcb3a-0eb1-45b5-92a6-35c6819af3bf-xtables-lock\") pod \"kube-proxy-9lf2q\" (UID: \"728dcb3a-0eb1-45b5-92a6-35c6819af3bf\") " pod="kube-system/kube-proxy-9lf2q"
	May 07 18:20:15 functional-527400 kubelet[5304]: I0507 18:20:15.772802    5304 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/514d12a0-9694-41b7-9ed5-5ae68ad0a037-tmp\") pod \"storage-provisioner\" (UID: \"514d12a0-9694-41b7-9ed5-5ae68ad0a037\") " pod="kube-system/storage-provisioner"
	May 07 18:20:16 functional-527400 kubelet[5304]: I0507 18:20:16.058362    5304 scope.go:117] "RemoveContainer" containerID="835333bb04e92a07f3c1d8bbd8719343438efc23c8ceac3b55603111b754afc1"
	May 07 18:20:16 functional-527400 kubelet[5304]: I0507 18:20:16.613982    5304 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f19a90023a9c4003de25c75dbf12416c4e6ce4448340e76457e08242a7a65d66"
	May 07 18:20:16 functional-527400 kubelet[5304]: I0507 18:20:16.743555    5304 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="89bd4ca484f0a473b28a4066f892647535865b016ee7699e90bfb9569d4151eb"
	May 07 18:20:18 functional-527400 kubelet[5304]: I0507 18:20:18.839808    5304 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 07 18:20:24 functional-527400 kubelet[5304]: I0507 18:20:24.695808    5304 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	May 07 18:21:10 functional-527400 kubelet[5304]: E0507 18:21:10.852419    5304 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 18:21:10 functional-527400 kubelet[5304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 18:21:10 functional-527400 kubelet[5304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 18:21:10 functional-527400 kubelet[5304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 18:21:10 functional-527400 kubelet[5304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 18:22:10 functional-527400 kubelet[5304]: E0507 18:22:10.851719    5304 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 18:22:10 functional-527400 kubelet[5304]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 18:22:10 functional-527400 kubelet[5304]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 18:22:10 functional-527400 kubelet[5304]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 18:22:10 functional-527400 kubelet[5304]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [671ca7df83c2] <==
	I0507 18:20:16.944229       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0507 18:20:16.961770       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0507 18:20:16.961857       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0507 18:20:34.369408       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0507 18:20:34.370216       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-527400_28a3cefb-dc8c-4f38-aba2-1b447d30d475!
	I0507 18:20:34.369751       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e2e7f395-ecb9-45f6-a027-fcc1cd674cbb", APIVersion:"v1", ResourceVersion:"577", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-527400_28a3cefb-dc8c-4f38-aba2-1b447d30d475 became leader
	I0507 18:20:34.470837       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-527400_28a3cefb-dc8c-4f38-aba2-1b447d30d475!
	
	
	==> storage-provisioner [e12da0342bc8] <==
	I0507 18:20:07.703997       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0507 18:20:07.714402       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0507 18:22:06.164716    7744 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-527400 -n functional-527400
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-527400 -n functional-527400: (10.8207421s)
helpers_test.go:261: (dbg) Run:  kubectl --context functional-527400 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestFunctional/serial/MinikubeKubectlCmdDirectly FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/serial/MinikubeKubectlCmdDirectly (30.14s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-527400 config unset cpus" to be -""- but got *"W0507 18:25:00.802384    3680 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-527400 config get cpus: exit status 14 (224.2517ms)

                                                
                                                
** stderr ** 
	W0507 18:25:01.080264    8040 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-527400 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0507 18:25:01.080264    8040 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 config set cpus 2
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-527400 config set cpus 2" to be -"! These changes will take effect upon a minikube delete and then a minikube start"- but got *"W0507 18:25:01.296923    9208 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\n! These changes will take effect upon a minikube delete and then a minikube start"*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 config get cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-527400 config get cpus" to be -""- but got *"W0507 18:25:01.532987    7284 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 config unset cpus
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-527400 config unset cpus" to be -""- but got *"W0507 18:25:01.751005    4288 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified."*
functional_test.go:1195: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-527400 config get cpus: exit status 14 (190.0828ms)

                                                
                                                
** stderr ** 
	W0507 18:25:01.959288   11860 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1206: expected config error for "out/minikube-windows-amd64.exe -p functional-527400 config get cpus" to be -"Error: specified key could not be found in config"- but got *"W0507 18:25:01.959288   11860 main.go:291] Unable to resolve the current Docker CLI context \"default\": context \"default\": context not found: open C:\\Users\\jenkins.minikube5\\.docker\\contexts\\meta\\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\\meta.json: The system cannot find the path specified.\nError: specified key could not be found in config"*
--- FAIL: TestFunctional/parallel/ConfigCmd (1.37s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 service --namespace=default --https --url hello-node
functional_test.go:1505: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-527400 service --namespace=default --https --url hello-node: exit status 1 (15.0381492s)

                                                
                                                
** stderr ** 
	W0507 18:25:42.095619   10404 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1507: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-527400 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.04s)

TestFunctional/parallel/ServiceCmd/Format (15.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 service hello-node --url --format={{.IP}}
functional_test.go:1536: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-527400 service hello-node --url --format={{.IP}}: exit status 1 (15.0182352s)

                                                
                                                
** stderr ** 
	W0507 18:25:57.117580   14020 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1538: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-527400 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1544: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.02s)

TestFunctional/parallel/ServiceCmd/URL (15.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 service hello-node --url
functional_test.go:1555: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-527400 service hello-node --url: exit status 1 (15.0325283s)

                                                
                                                
** stderr ** 
	W0507 18:26:12.132351    4380 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
functional_test.go:1557: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-527400 service hello-node --url": exit status 1
functional_test.go:1561: found endpoint for hello-node: 
functional_test.go:1569: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.03s)

TestMultiControlPlane/serial/PingHostFromPods (63.76s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-45d7p -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-45d7p -- sh -c "ping -c 1 172.19.128.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-45d7p -- sh -c "ping -c 1 172.19.128.1": exit status 1 (10.4139134s)

                                                
                                                
-- stdout --
	PING 172.19.128.1 (172.19.128.1): 56 data bytes
	
	--- 172.19.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0507 18:42:40.226468    6348 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.19.128.1) from pod (busybox-fc5497c4f-45d7p): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-5z998 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-5z998 -- sh -c "ping -c 1 172.19.128.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-5z998 -- sh -c "ping -c 1 172.19.128.1": exit status 1 (10.4174079s)

                                                
                                                
-- stdout --
	PING 172.19.128.1 (172.19.128.1): 56 data bytes
	
	--- 172.19.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0507 18:42:51.062918    2988 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.19.128.1) from pod (busybox-fc5497c4f-5z998): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-pkgxl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-pkgxl -- sh -c "ping -c 1 172.19.128.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-pkgxl -- sh -c "ping -c 1 172.19.128.1": exit status 1 (10.4138597s)

                                                
                                                
-- stdout --
	PING 172.19.128.1 (172.19.128.1): 56 data bytes
	
	--- 172.19.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	W0507 18:43:01.898126    6840 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

                                                
                                                
** /stderr **
ha_test.go:219: Failed to ping host (172.19.128.1) from pod (busybox-fc5497c4f-pkgxl): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-210800 -n ha-210800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-210800 -n ha-210800: (10.8841971s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 logs -n 25
E0507 18:43:26.338861    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 logs -n 25: (7.7436594s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-527400 image build -t     | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:28 UTC | 07 May 24 18:28 UTC |
	|         | localhost/my-image:functional-527400 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-527400                    | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:28 UTC | 07 May 24 18:28 UTC |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-527400 image ls           | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:28 UTC | 07 May 24 18:28 UTC |
	| delete  | -p functional-527400                 | functional-527400 | minikube5\jenkins | v1.33.0 | 07 May 24 18:30 UTC | 07 May 24 18:31 UTC |
	| start   | -p ha-210800 --wait=true             | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:31 UTC | 07 May 24 18:41 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- apply -f             | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC | 07 May 24 18:42 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- rollout status       | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC | 07 May 24 18:42 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- get pods -o          | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC | 07 May 24 18:42 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- get pods -o          | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC | 07 May 24 18:42 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- exec                 | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC | 07 May 24 18:42 UTC |
	|         | busybox-fc5497c4f-45d7p --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- exec                 | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC | 07 May 24 18:42 UTC |
	|         | busybox-fc5497c4f-5z998 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- exec                 | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC | 07 May 24 18:42 UTC |
	|         | busybox-fc5497c4f-pkgxl --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- exec                 | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC | 07 May 24 18:42 UTC |
	|         | busybox-fc5497c4f-45d7p --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- exec                 | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC | 07 May 24 18:42 UTC |
	|         | busybox-fc5497c4f-5z998 --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- exec                 | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC | 07 May 24 18:42 UTC |
	|         | busybox-fc5497c4f-pkgxl --           |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- exec                 | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC | 07 May 24 18:42 UTC |
	|         | busybox-fc5497c4f-45d7p -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- exec                 | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC | 07 May 24 18:42 UTC |
	|         | busybox-fc5497c4f-5z998 -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- exec                 | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC | 07 May 24 18:42 UTC |
	|         | busybox-fc5497c4f-pkgxl -- nslookup  |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- get pods -o          | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC | 07 May 24 18:42 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- exec                 | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC | 07 May 24 18:42 UTC |
	|         | busybox-fc5497c4f-45d7p              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- exec                 | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC |                     |
	|         | busybox-fc5497c4f-45d7p -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.128.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- exec                 | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC | 07 May 24 18:42 UTC |
	|         | busybox-fc5497c4f-5z998              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- exec                 | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:42 UTC |                     |
	|         | busybox-fc5497c4f-5z998 -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.128.1            |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- exec                 | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:43 UTC | 07 May 24 18:43 UTC |
	|         | busybox-fc5497c4f-pkgxl              |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-210800 -- exec                 | ha-210800         | minikube5\jenkins | v1.33.0 | 07 May 24 18:43 UTC |                     |
	|         | busybox-fc5497c4f-pkgxl -- sh        |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.128.1            |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/07 18:31:40
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0507 18:31:40.267319    8396 out.go:291] Setting OutFile to fd 792 ...
	I0507 18:31:40.268458    8396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 18:31:40.268458    8396 out.go:304] Setting ErrFile to fd 916...
	I0507 18:31:40.268458    8396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 18:31:40.286174    8396 out.go:298] Setting JSON to false
	I0507 18:31:40.293256    8396 start.go:129] hostinfo: {"hostname":"minikube5","uptime":22618,"bootTime":1715084081,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0507 18:31:40.293330    8396 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 18:31:40.319405    8396 out.go:177] * [ha-210800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0507 18:31:40.324555    8396 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:31:40.323436    8396 notify.go:220] Checking for updates...
	I0507 18:31:40.327534    8396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 18:31:40.329963    8396 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0507 18:31:40.332037    8396 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 18:31:40.341206    8396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 18:31:40.346242    8396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 18:31:45.132150    8396 out.go:177] * Using the hyperv driver based on user configuration
	I0507 18:31:45.134279    8396 start.go:297] selected driver: hyperv
	I0507 18:31:45.134331    8396 start.go:901] validating driver "hyperv" against <nil>
	I0507 18:31:45.134331    8396 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 18:31:45.180293    8396 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 18:31:45.181141    8396 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 18:31:45.181141    8396 cni.go:84] Creating CNI manager for ""
	I0507 18:31:45.181141    8396 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0507 18:31:45.181141    8396 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0507 18:31:45.181601    8396 start.go:340] cluster config:
	{Name:ha-210800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 18:31:45.181601    8396 iso.go:125] acquiring lock: {Name:mk4977609d05da04fcecf95837b3381fb1950afd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 18:31:45.187134    8396 out.go:177] * Starting "ha-210800" primary control-plane node in "ha-210800" cluster
	I0507 18:31:45.189802    8396 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 18:31:45.190005    8396 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0507 18:31:45.190005    8396 cache.go:56] Caching tarball of preloaded images
	I0507 18:31:45.190302    8396 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0507 18:31:45.190455    8396 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 18:31:45.190996    8396 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:31:45.191191    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json: {Name:mkd92c4604bf507480a04d8ffc294646ec1e422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:31:45.192083    8396 start.go:360] acquireMachinesLock for ha-210800: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 18:31:45.192083    8396 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-210800"
	I0507 18:31:45.192083    8396 start.go:93] Provisioning new machine with config: &{Name:ha-210800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:31:45.192083    8396 start.go:125] createHost starting for "" (driver="hyperv")
	I0507 18:31:45.194277    8396 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 18:31:45.194277    8396 start.go:159] libmachine.API.Create for "ha-210800" (driver="hyperv")
	I0507 18:31:45.194277    8396 client.go:168] LocalClient.Create starting
	I0507 18:31:45.195275    8396 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0507 18:31:45.195275    8396 main.go:141] libmachine: Decoding PEM data...
	I0507 18:31:45.195275    8396 main.go:141] libmachine: Parsing certificate...
	I0507 18:31:45.195799    8396 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0507 18:31:45.195835    8396 main.go:141] libmachine: Decoding PEM data...
	I0507 18:31:45.195835    8396 main.go:141] libmachine: Parsing certificate...
	I0507 18:31:45.195835    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0507 18:31:46.990897    8396 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0507 18:31:46.990897    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:31:46.991848    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0507 18:31:48.515381    8396 main.go:141] libmachine: [stdout =====>] : False
	
	I0507 18:31:48.516214    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:31:48.516214    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 18:31:49.807093    8396 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 18:31:49.807093    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:31:49.807093    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 18:31:52.978461    8396 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 18:31:52.978461    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:31:52.981010    8396 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0507 18:31:53.310864    8396 main.go:141] libmachine: Creating SSH key...
	I0507 18:31:53.566648    8396 main.go:141] libmachine: Creating VM...
	I0507 18:31:53.566648    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 18:31:56.065672    8396 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 18:31:56.065672    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:31:56.066034    8396 main.go:141] libmachine: Using switch "Default Switch"
	I0507 18:31:56.066231    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 18:31:57.622894    8396 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 18:31:57.622894    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:31:57.622970    8396 main.go:141] libmachine: Creating VHD
	I0507 18:31:57.622970    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0507 18:32:01.075430    8396 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 34D3DCE6-8404-4989-9D3E-495162DF6FFE
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0507 18:32:01.075430    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:01.075820    8396 main.go:141] libmachine: Writing magic tar header
	I0507 18:32:01.075920    8396 main.go:141] libmachine: Writing SSH key tar header
	I0507 18:32:01.086634    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0507 18:32:04.117009    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:04.117009    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:04.117321    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\disk.vhd' -SizeBytes 20000MB
	I0507 18:32:06.500411    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:06.501049    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:06.501121    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-210800 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0507 18:32:09.767260    8396 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-210800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0507 18:32:09.767260    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:09.768258    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-210800 -DynamicMemoryEnabled $false
	I0507 18:32:11.836796    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:11.836796    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:11.837329    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-210800 -Count 2
	I0507 18:32:13.797109    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:13.797109    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:13.797610    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-210800 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\boot2docker.iso'
	I0507 18:32:16.071910    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:16.071910    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:16.071910    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-210800 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\disk.vhd'
	I0507 18:32:18.412166    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:18.412166    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:18.412166    8396 main.go:141] libmachine: Starting VM...
	I0507 18:32:18.412455    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-210800
	I0507 18:32:21.217165    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:21.217165    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:21.217165    8396 main.go:141] libmachine: Waiting for host to start...
	I0507 18:32:21.217872    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:23.247435    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:23.247435    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:23.248062    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:32:25.523052    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:25.523052    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:26.537349    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:28.501694    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:28.501694    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:28.501694    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:32:30.793866    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:30.793903    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:31.796667    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:33.782694    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:33.782694    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:33.782694    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:32:36.030848    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:36.031871    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:37.048032    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:38.998796    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:38.998796    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:38.999870    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:32:41.247235    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:41.247235    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:42.259808    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:44.263282    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:44.263282    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:44.263398    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:32:46.596942    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:32:46.596942    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:46.597430    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:48.496366    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:48.496428    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:48.496428    8396 machine.go:94] provisionDockerMachine start ...
	I0507 18:32:48.496428    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:50.432228    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:50.433158    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:50.433158    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:32:52.710668    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:32:52.710668    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:52.716204    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:32:52.728730    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.132.69 22 <nil> <nil>}
	I0507 18:32:52.728730    8396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 18:32:52.872442    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0507 18:32:52.872442    8396 buildroot.go:166] provisioning hostname "ha-210800"
	I0507 18:32:52.872442    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:54.725719    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:54.725795    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:54.725795    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:32:57.035116    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:32:57.035116    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:57.039171    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:32:57.039790    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.132.69 22 <nil> <nil>}
	I0507 18:32:57.039790    8396 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-210800 && echo "ha-210800" | sudo tee /etc/hostname
	I0507 18:32:57.212271    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-210800
	
	I0507 18:32:57.212507    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:59.089825    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:59.090473    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:59.090554    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:01.378519    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:01.378546    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:01.382441    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:33:01.383067    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.132.69 22 <nil> <nil>}
	I0507 18:33:01.383067    8396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-210800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-210800/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-210800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 18:33:01.536192    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 18:33:01.536275    8396 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0507 18:33:01.536275    8396 buildroot.go:174] setting up certificates
	I0507 18:33:01.536374    8396 provision.go:84] configureAuth start
	I0507 18:33:01.536494    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:03.485418    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:03.485418    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:03.486470    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:05.836813    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:05.836813    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:05.836920    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:07.770118    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:07.770118    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:07.770227    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:10.127424    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:10.127424    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:10.128242    8396 provision.go:143] copyHostCerts
	I0507 18:33:10.128437    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0507 18:33:10.128816    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0507 18:33:10.128888    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0507 18:33:10.129437    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0507 18:33:10.130813    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0507 18:33:10.131166    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0507 18:33:10.131166    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0507 18:33:10.131593    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0507 18:33:10.132747    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0507 18:33:10.133396    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0507 18:33:10.133396    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0507 18:33:10.133396    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0507 18:33:10.134655    8396 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-210800 san=[127.0.0.1 172.19.132.69 ha-210800 localhost minikube]
	I0507 18:33:10.415997    8396 provision.go:177] copyRemoteCerts
	I0507 18:33:10.423371    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 18:33:10.423371    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:12.385601    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:12.385674    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:12.385745    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:14.679663    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:14.679663    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:14.679663    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:33:14.783974    8396 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3603061s)
	I0507 18:33:14.783974    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0507 18:33:14.783974    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0507 18:33:14.824071    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0507 18:33:14.824983    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0507 18:33:14.879080    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0507 18:33:14.879491    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0507 18:33:14.924186    8396 provision.go:87] duration metric: took 13.3868373s to configureAuth
	I0507 18:33:14.924280    8396 buildroot.go:189] setting minikube options for container-runtime
	I0507 18:33:14.924509    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:33:14.924509    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:16.861966    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:16.862036    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:16.862036    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:19.148757    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:19.148757    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:19.154957    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:33:19.155062    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.132.69 22 <nil> <nil>}
	I0507 18:33:19.155062    8396 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 18:33:19.292639    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 18:33:19.292639    8396 buildroot.go:70] root file system type: tmpfs
	I0507 18:33:19.292639    8396 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 18:33:19.293173    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:21.137238    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:21.137238    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:21.137238    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:23.429102    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:23.429102    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:23.433478    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:33:23.434175    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.132.69 22 <nil> <nil>}
	I0507 18:33:23.434175    8396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 18:33:23.598916    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 18:33:23.598916    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:25.537242    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:25.537242    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:25.537513    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:27.814354    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:27.814354    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:27.818357    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:33:27.818520    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.132.69 22 <nil> <nil>}
	I0507 18:33:27.818520    8396 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 18:33:29.905831    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
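The `diff -u … || { mv …; restart; }` command above is a compare-then-swap idiom: write the candidate unit to a `.new` path, and only when `diff` reports a difference (or the target is missing) move it into place and restart the service. A minimal sketch of the same pattern against throwaway paths (no sudo or systemctl; `reload_count` stands in for the daemon-reload/restart side effect, and all names are illustrative):

```shell
# Compare-then-swap: install $1.new over $1 only when the content differs,
# counting how many times the "service restart" side effect fires.
reload_count=0

install_if_changed() {
    target="$1"
    # diff exits 0 when the files match; non-zero on difference or missing target
    if ! diff -u "$target" "$target.new" >/dev/null 2>&1; then
        mv "$target.new" "$target"
        reload_count=$((reload_count + 1))   # stands in for daemon-reload + restart
    else
        rm -f "$target.new"                  # identical: discard the candidate
    fi
}

workdir=$(mktemp -d)
svc="$workdir/docker.service"

printf 'ExecStart=/usr/bin/dockerd\n' > "$svc.new"
install_if_changed "$svc"    # first install: target missing -> swapped in

printf 'ExecStart=/usr/bin/dockerd\n' > "$svc.new"
install_if_changed "$svc"    # identical content -> no restart

printf 'ExecStart=/usr/bin/dockerd --debug\n' > "$svc.new"
install_if_changed "$svc"    # changed content -> swapped in again
```

In the log the diff fails with "No such file or directory" (fresh VM), so the unit is installed and docker is enabled and restarted on the first pass.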
	
	I0507 18:33:29.905831    8396 machine.go:97] duration metric: took 41.4065907s to provisionDockerMachine
	I0507 18:33:29.905831    8396 client.go:171] duration metric: took 1m44.7044941s to LocalClient.Create
	I0507 18:33:29.905831    8396 start.go:167] duration metric: took 1m44.7044941s to libmachine.API.Create "ha-210800"
	I0507 18:33:29.905831    8396 start.go:293] postStartSetup for "ha-210800" (driver="hyperv")
	I0507 18:33:29.906450    8396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 18:33:29.916237    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 18:33:29.916237    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:31.820974    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:31.820974    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:31.822033    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:34.093386    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:34.093386    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:34.093386    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:33:34.198272    8396 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2817432s)
	I0507 18:33:34.209356    8396 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 18:33:34.216100    8396 info.go:137] Remote host: Buildroot 2023.02.9
	I0507 18:33:34.216100    8396 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0507 18:33:34.216100    8396 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0507 18:33:34.217092    8396 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> 99922.pem in /etc/ssl/certs
	I0507 18:33:34.217179    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /etc/ssl/certs/99922.pem
	I0507 18:33:34.226054    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0507 18:33:34.242554    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /etc/ssl/certs/99922.pem (1708 bytes)
	I0507 18:33:34.283920    8396 start.go:296] duration metric: took 4.3771294s for postStartSetup
	I0507 18:33:34.287294    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:36.226853    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:36.226853    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:36.227084    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:38.586722    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:38.586722    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:38.586722    8396 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:33:38.590493    8396 start.go:128] duration metric: took 1m53.390759s to createHost
	I0507 18:33:38.590493    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:40.538496    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:40.538496    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:40.538562    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:42.879129    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:42.879129    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:42.883780    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:33:42.884386    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.132.69 22 <nil> <nil>}
	I0507 18:33:42.884386    8396 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0507 18:33:43.023569    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715106823.242283585
	
	I0507 18:33:43.023662    8396 fix.go:216] guest clock: 1715106823.242283585
	I0507 18:33:43.023662    8396 fix.go:229] Guest: 2024-05-07 18:33:43.242283585 +0000 UTC Remote: 2024-05-07 18:33:38.5904938 +0000 UTC m=+118.435968701 (delta=4.651789785s)
	I0507 18:33:43.023662    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:44.945633    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:44.945633    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:44.946422    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:47.234846    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:47.234846    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:47.238995    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:33:47.239318    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.132.69 22 <nil> <nil>}
	I0507 18:33:47.239393    8396 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715106823
	I0507 18:33:47.386230    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May  7 18:33:43 UTC 2024
	
	I0507 18:33:47.386230    8396 fix.go:236] clock set: Tue May  7 18:33:43 UTC 2024
	 (err=<nil>)
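The clock-fix step above reads the guest's epoch clock, computes the delta against the host (4.65s here), and resets the guest with `sudo date -s @<epoch>`. A small sketch of that skew check, assuming an illustrative 2-second threshold (the function and variable names are not minikube's):

```shell
# Clock-skew check: compare a guest epoch timestamp against the host's,
# and emit the `date -s @<epoch>` command that would resync the guest.
skew_fix_cmd() {
    guest=$1; host=$2; threshold=${3:-2}
    delta=$((guest - host))
    [ "$delta" -lt 0 ] && delta=$((-delta))
    if [ "$delta" -gt "$threshold" ]; then
        printf 'sudo date -s @%s\n' "$host"
    fi
}

# Values modeled on the log: guest 1715106823 vs. an assumed host reading
cmd=$(skew_fix_cmd 1715106823 1715106818)
ok=$(skew_fix_cmd 1715106820 1715106818)   # within threshold: no command emitted
```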
	I0507 18:33:47.386230    8396 start.go:83] releasing machines lock for "ha-210800", held for 2m2.1858946s
	I0507 18:33:47.386770    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:49.320393    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:49.320393    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:49.321325    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:51.655612    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:51.655612    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:51.659402    8396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 18:33:51.659488    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:51.666467    8396 ssh_runner.go:195] Run: cat /version.json
	I0507 18:33:51.666467    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:53.608511    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:53.608511    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:53.608511    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:53.627212    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:53.627212    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:53.628025    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:55.978727    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:55.978931    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:55.978931    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:33:56.005146    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:56.005146    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:56.005771    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:33:56.087130    8396 ssh_runner.go:235] Completed: cat /version.json: (4.4203604s)
	I0507 18:33:56.095373    8396 ssh_runner.go:195] Run: systemctl --version
	I0507 18:33:56.156007    8396 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.4962973s)
	I0507 18:33:56.166277    8396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0507 18:33:56.175322    8396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 18:33:56.184724    8396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0507 18:33:56.212937    8396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0507 18:33:56.212937    8396 start.go:494] detecting cgroup driver to use...
	I0507 18:33:56.212937    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 18:33:56.263768    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0507 18:33:56.293164    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 18:33:56.312005    8396 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 18:33:56.323112    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 18:33:56.352226    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 18:33:56.383447    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 18:33:56.410638    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 18:33:56.438750    8396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 18:33:56.465337    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 18:33:56.497084    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 18:33:56.524064    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
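The containerd reconfiguration above is a sequence of idempotent in-place `sed` edits to `/etc/containerd/config.toml`. Replaying two of them against a toy config file (the file content is a minimal made-up fragment, not a full containerd config):

```shell
# Apply two of the logged sed edits to a scratch config.toml.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF

# pin the pause image (mirrors the sandbox_image sed in the log)
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
# force the cgroupfs driver (mirrors the SystemdCgroup sed)
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
```

The `( *)` capture preserves the original indentation, which is why the same command works regardless of how deeply the key is nested.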
	I0507 18:33:56.555335    8396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 18:33:56.579709    8396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 18:33:56.603700    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:33:56.803752    8396 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0507 18:33:56.830461    8396 start.go:494] detecting cgroup driver to use...
	I0507 18:33:56.841791    8396 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 18:33:56.872976    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 18:33:56.902669    8396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 18:33:56.946818    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 18:33:56.982105    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 18:33:57.014666    8396 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0507 18:33:57.076148    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 18:33:57.099890    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 18:33:57.140585    8396 ssh_runner.go:195] Run: which cri-dockerd
	I0507 18:33:57.155485    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 18:33:57.172359    8396 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 18:33:57.210978    8396 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 18:33:57.402887    8396 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 18:33:57.568918    8396 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 18:33:57.569264    8396 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0507 18:33:57.608281    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:33:57.786235    8396 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 18:34:00.287567    8396 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5011606s)
	I0507 18:34:00.302671    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 18:34:00.341568    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 18:34:00.376727    8396 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 18:34:00.559799    8396 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 18:34:00.741447    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:34:00.924723    8396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 18:34:00.964793    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 18:34:00.998199    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:34:01.178832    8396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 18:34:01.280841    8396 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 18:34:01.291060    8396 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 18:34:01.298536    8396 start.go:562] Will wait 60s for crictl version
	I0507 18:34:01.309109    8396 ssh_runner.go:195] Run: which crictl
	I0507 18:34:01.324588    8396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 18:34:01.382841    8396 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0507 18:34:01.390260    8396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 18:34:01.427836    8396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 18:34:01.458548    8396 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0507 18:34:01.458548    8396 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0507 18:34:01.465200    8396 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0507 18:34:01.465200    8396 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0507 18:34:01.465200    8396 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0507 18:34:01.465200    8396 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a3:a5:4f Flags:up|broadcast|multicast|running}
	I0507 18:34:01.467565    8396 ip.go:210] interface addr: fe80::1edb:f5fd:c218:d8d2/64
	I0507 18:34:01.467565    8396 ip.go:210] interface addr: 172.19.128.1/20
	I0507 18:34:01.475594    8396 ssh_runner.go:195] Run: grep 172.19.128.1	host.minikube.internal$ /etc/hosts
	I0507 18:34:01.481961    8396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
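The `host.minikube.internal` update above is an idempotent "drop the stale entry, append the fresh one" rewrite of `/etc/hosts`, staged through a temp file. The same pattern against a scratch hosts file (no sudo; helper name is illustrative):

```shell
# Idempotent hosts-file entry update: remove any old line for $name,
# append the current mapping, and swap the result into place.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.128.9\thost.minikube.internal\n' > "$hosts"

update_hosts() {
    ip=$1; name=$2; file=$3
    tmp=$(mktemp)
    # grep -v drops any stale line ending in the hostname
    { grep -v "[[:space:]]$name\$" "$file"; printf '%s\t%s\n' "$ip" "$name"; } > "$tmp"
    mv "$tmp" "$file"
}

update_hosts 172.19.128.1 host.minikube.internal "$hosts"
update_hosts 172.19.128.1 host.minikube.internal "$hosts"   # run twice: still one entry
```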
	I0507 18:34:01.514485    8396 kubeadm.go:877] updating cluster {Name:ha-210800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.132.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0507 18:34:01.514485    8396 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 18:34:01.521486    8396 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 18:34:01.547082    8396 docker.go:685] Got preloaded images: 
	I0507 18:34:01.547156    8396 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0507 18:34:01.559292    8396 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0507 18:34:01.590071    8396 ssh_runner.go:195] Run: which lz4
	I0507 18:34:01.596074    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0507 18:34:01.604654    8396 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0507 18:34:01.610591    8396 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0507 18:34:01.610591    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0507 18:34:03.028291    8396 docker.go:649] duration metric: took 1.43166s to copy over tarball
	I0507 18:34:03.039375    8396 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0507 18:34:12.483258    8396 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.4430391s)
	I0507 18:34:12.483258    8396 ssh_runner.go:146] rm: /preloaded.tar.lz4
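The preload step above copies a compressed image tarball into the VM, extracts it with `tar -I <compressor>`, then removes the archive. The same round trip against scratch directories (the log uses lz4; gzip is substituted here so the sketch runs without the lz4 tool, and the file names are made up):

```shell
# Ship a compressed tarball, extract it with an explicit compressor,
# then delete the archive -- mirroring the preloaded.tar.lz4 handling.
src=$(mktemp -d); dst=$(mktemp -d)
echo "kube-apiserver" > "$src/image.txt"

# create the "preload" archive (the log's equivalent is built with lz4)
tar -I gzip -cf "$dst/preloaded.tar.gz" -C "$src" image.txt

# extract with the matching compressor, as in: tar -I lz4 -C /var -xf /preloaded.tar.lz4
tar -I gzip -xf "$dst/preloaded.tar.gz" -C "$dst"

# mirrors ssh_runner's rm of /preloaded.tar.lz4 after extraction
rm -f "$dst/preloaded.tar.gz"
```

Note `-I`/`--use-compress-program` is a GNU tar option; on BSD tar the equivalent is `--use-compress-program` as well, but the log's guest (Buildroot) ships GNU tar.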
	I0507 18:34:12.542216    8396 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0507 18:34:12.559502    8396 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0507 18:34:12.603692    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:34:12.787347    8396 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 18:34:16.129136    8396 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3415599s)
	I0507 18:34:16.137182    8396 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 18:34:16.159345    8396 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0507 18:34:16.159409    8396 cache_images.go:84] Images are preloaded, skipping loading
	I0507 18:34:16.159409    8396 kubeadm.go:928] updating node { 172.19.132.69 8443 v1.30.0 docker true true} ...
	I0507 18:34:16.159658    8396 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-210800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.132.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0507 18:34:16.166615    8396 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0507 18:34:16.198247    8396 cni.go:84] Creating CNI manager for ""
	I0507 18:34:16.198247    8396 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0507 18:34:16.198247    8396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0507 18:34:16.198247    8396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.132.69 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-210800 NodeName:ha-210800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.132.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.132.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0507 18:34:16.198247    8396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.132.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-210800"
	  kubeletExtraArgs:
	    node-ip: 172.19.132.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.132.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0507 18:34:16.198247    8396 kube-vip.go:111] generating kube-vip config ...
	I0507 18:34:16.207240    8396 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0507 18:34:16.230277    8396 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0507 18:34:16.231265    8396 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.143.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0507 18:34:16.242254    8396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0507 18:34:16.264257    8396 binaries.go:44] Found k8s binaries, skipping transfer
	I0507 18:34:16.274246    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0507 18:34:16.295434    8396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0507 18:34:16.324449    8396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 18:34:16.359021    8396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0507 18:34:16.387727    8396 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0507 18:34:16.426454    8396 ssh_runner.go:195] Run: grep 172.19.143.254	control-plane.minikube.internal$ /etc/hosts
	I0507 18:34:16.432961    8396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 18:34:16.459827    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:34:16.629800    8396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 18:34:16.654738    8396 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800 for IP: 172.19.132.69
	I0507 18:34:16.654932    8396 certs.go:194] generating shared ca certs ...
	I0507 18:34:16.654932    8396 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:16.655753    8396 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0507 18:34:16.656180    8396 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0507 18:34:16.656382    8396 certs.go:256] generating profile certs ...
	I0507 18:34:16.657119    8396 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.key
	I0507 18:34:16.657238    8396 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.crt with IP's: []
	I0507 18:34:16.732052    8396 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.crt ...
	I0507 18:34:16.732052    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.crt: {Name:mk59fbe227eecdee4ffc9752f8af7db1e6cae876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:16.733685    8396 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.key ...
	I0507 18:34:16.733685    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.key: {Name:mkc8e35621f7e8f0fa74ff63f98b71222545a7b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:16.735467    8396 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.6c1d5e03
	I0507 18:34:16.735467    8396 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.6c1d5e03 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.132.69 172.19.143.254]
	I0507 18:34:16.992191    8396 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.6c1d5e03 ...
	I0507 18:34:16.992191    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.6c1d5e03: {Name:mke632d2d15fa0eedb6c0c6aa4eefca3f13e4bd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:16.994139    8396 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.6c1d5e03 ...
	I0507 18:34:16.994139    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.6c1d5e03: {Name:mk023deb57a6234e869043d6d13dae2827f4a2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:16.994576    8396 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.6c1d5e03 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt
	I0507 18:34:17.007850    8396 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.6c1d5e03 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key
	I0507 18:34:17.008722    8396 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key
	I0507 18:34:17.008722    8396 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt with IP's: []
	I0507 18:34:17.383476    8396 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt ...
	I0507 18:34:17.383476    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt: {Name:mk1a84aa147a934c266b8199690fcdbca720b9f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:17.385483    8396 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key ...
	I0507 18:34:17.385483    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key: {Name:mkbc78be0b182612ff8178f9381e616ab597e2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:17.386494    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0507 18:34:17.387486    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0507 18:34:17.387486    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0507 18:34:17.387486    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0507 18:34:17.387486    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0507 18:34:17.387486    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0507 18:34:17.387486    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0507 18:34:17.396480    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0507 18:34:17.397138    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem (1338 bytes)
	W0507 18:34:17.397534    8396 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992_empty.pem, impossibly tiny 0 bytes
	I0507 18:34:17.397534    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0507 18:34:17.397860    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0507 18:34:17.398083    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0507 18:34:17.398083    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0507 18:34:17.398500    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem (1708 bytes)
	I0507 18:34:17.398747    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /usr/share/ca-certificates/99922.pem
	I0507 18:34:17.398889    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:34:17.398889    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem -> /usr/share/ca-certificates/9992.pem
	I0507 18:34:17.399510    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 18:34:17.447078    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 18:34:17.485076    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 18:34:17.522069    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0507 18:34:17.568219    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0507 18:34:17.613489    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0507 18:34:17.661087    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0507 18:34:17.700309    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0507 18:34:17.742476    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /usr/share/ca-certificates/99922.pem (1708 bytes)
	I0507 18:34:17.787798    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 18:34:17.830594    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem --> /usr/share/ca-certificates/9992.pem (1338 bytes)
	I0507 18:34:17.870851    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0507 18:34:17.911076    8396 ssh_runner.go:195] Run: openssl version
	I0507 18:34:17.931697    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99922.pem && ln -fs /usr/share/ca-certificates/99922.pem /etc/ssl/certs/99922.pem"
	I0507 18:34:17.961083    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99922.pem
	I0507 18:34:17.968714    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 18:34:17.981379    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99922.pem
	I0507 18:34:18.001710    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99922.pem /etc/ssl/certs/3ec20f2e.0"
	I0507 18:34:18.030922    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 18:34:18.061028    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:34:18.071443    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:34:18.083945    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:34:18.098901    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0507 18:34:18.125427    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9992.pem && ln -fs /usr/share/ca-certificates/9992.pem /etc/ssl/certs/9992.pem"
	I0507 18:34:18.152762    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9992.pem
	I0507 18:34:18.159930    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 18:34:18.168357    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9992.pem
	I0507 18:34:18.185917    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9992.pem /etc/ssl/certs/51391683.0"
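	The symlink names above (`3ec20f2e.0`, `b5213941.0`, `51391683.0`) come from `openssl x509 -hash`, which prints the subject-name hash that OpenSSL's `-CApath` lookup expects as `<hash>.0`. A minimal sketch of that convention (hypothetical paths; assumes `openssl` is on the PATH):

```shell
# Create a throwaway self-signed CA cert in a temp dir (illustrative only).
set -e
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$tmp/ca.key" -out "$tmp/ca.pem" -days 1 2>/dev/null

# Compute the subject-name hash and create the <hash>.0 link,
# mirroring what the test log does into /etc/ssl/certs.
hash=$(openssl x509 -hash -noout -in "$tmp/ca.pem")
ln -fs "$tmp/ca.pem" "$tmp/$hash.0"

# With the hash link in place, -CApath lookup can trust the cert.
openssl verify -CApath "$tmp" "$tmp/ca.pem"
```

The same hash for the same CA file is why minikube can test for an existing link (`test -L`) before recreating it.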
	I0507 18:34:18.210269    8396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 18:34:18.216286    8396 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0507 18:34:18.216938    8396 kubeadm.go:391] StartCluster: {Name:ha-210800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.132.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 18:34:18.223970    8396 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0507 18:34:18.257737    8396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0507 18:34:18.285132    8396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0507 18:34:18.308705    8396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0507 18:34:18.324760    8396 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0507 18:34:18.324760    8396 kubeadm.go:156] found existing configuration files:
	
	I0507 18:34:18.334009    8396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0507 18:34:18.349398    8396 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0507 18:34:18.359781    8396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0507 18:34:18.385849    8396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0507 18:34:18.401809    8396 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0507 18:34:18.409495    8396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0507 18:34:18.437518    8396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0507 18:34:18.452800    8396 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0507 18:34:18.461825    8396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0507 18:34:18.487952    8396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0507 18:34:18.502201    8396 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0507 18:34:18.511217    8396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0507 18:34:18.527704    8396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0507 18:34:18.853530    8396 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0507 18:34:31.670158    8396 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0507 18:34:31.670158    8396 kubeadm.go:309] [preflight] Running pre-flight checks
	I0507 18:34:31.670158    8396 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0507 18:34:31.671615    8396 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0507 18:34:31.671858    8396 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0507 18:34:31.671858    8396 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0507 18:34:31.674826    8396 out.go:204]   - Generating certificates and keys ...
	I0507 18:34:31.674973    8396 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0507 18:34:31.675121    8396 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0507 18:34:31.675512    8396 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0507 18:34:31.675512    8396 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0507 18:34:31.675512    8396 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0507 18:34:31.675512    8396 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0507 18:34:31.676042    8396 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0507 18:34:31.676359    8396 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-210800 localhost] and IPs [172.19.132.69 127.0.0.1 ::1]
	I0507 18:34:31.676496    8396 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0507 18:34:31.676544    8396 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-210800 localhost] and IPs [172.19.132.69 127.0.0.1 ::1]
	I0507 18:34:31.676544    8396 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0507 18:34:31.676544    8396 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0507 18:34:31.677078    8396 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0507 18:34:31.677172    8396 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0507 18:34:31.677505    8396 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0507 18:34:31.677505    8396 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0507 18:34:31.677505    8396 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0507 18:34:31.677505    8396 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0507 18:34:31.678122    8396 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0507 18:34:31.678122    8396 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0507 18:34:31.678122    8396 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0507 18:34:31.680943    8396 out.go:204]   - Booting up control plane ...
	I0507 18:34:31.681153    8396 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0507 18:34:31.681313    8396 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0507 18:34:31.681510    8396 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0507 18:34:31.681687    8396 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0507 18:34:31.681687    8396 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0507 18:34:31.681687    8396 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0507 18:34:31.682109    8396 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0507 18:34:31.682109    8396 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0507 18:34:31.682650    8396 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002741282s
	I0507 18:34:31.682909    8396 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0507 18:34:31.683166    8396 kubeadm.go:309] [api-check] The API server is healthy after 7.003581288s
	I0507 18:34:31.683468    8396 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0507 18:34:31.683756    8396 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0507 18:34:31.683930    8396 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0507 18:34:31.684258    8396 kubeadm.go:309] [mark-control-plane] Marking the node ha-210800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0507 18:34:31.684313    8396 kubeadm.go:309] [bootstrap-token] Using token: wq75wp.g5obqxh3w2h2uzc4
	I0507 18:34:31.687258    8396 out.go:204]   - Configuring RBAC rules ...
	I0507 18:34:31.687324    8396 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0507 18:34:31.687324    8396 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0507 18:34:31.687959    8396 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0507 18:34:31.687959    8396 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0507 18:34:31.687959    8396 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0507 18:34:31.688606    8396 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0507 18:34:31.688606    8396 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0507 18:34:31.688606    8396 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0507 18:34:31.689191    8396 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0507 18:34:31.689191    8396 kubeadm.go:309] 
	I0507 18:34:31.689191    8396 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0507 18:34:31.689191    8396 kubeadm.go:309] 
	I0507 18:34:31.689191    8396 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0507 18:34:31.689191    8396 kubeadm.go:309] 
	I0507 18:34:31.689191    8396 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0507 18:34:31.689191    8396 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0507 18:34:31.689191    8396 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0507 18:34:31.689191    8396 kubeadm.go:309] 
	I0507 18:34:31.689191    8396 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0507 18:34:31.689191    8396 kubeadm.go:309] 
	I0507 18:34:31.689191    8396 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0507 18:34:31.689191    8396 kubeadm.go:309] 
	I0507 18:34:31.689191    8396 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0507 18:34:31.690193    8396 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0507 18:34:31.690193    8396 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0507 18:34:31.690193    8396 kubeadm.go:309] 
	I0507 18:34:31.690193    8396 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0507 18:34:31.690193    8396 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0507 18:34:31.690193    8396 kubeadm.go:309] 
	I0507 18:34:31.690193    8396 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token wq75wp.g5obqxh3w2h2uzc4 \
	I0507 18:34:31.690193    8396 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 \
	I0507 18:34:31.691217    8396 kubeadm.go:309] 	--control-plane 
	I0507 18:34:31.691217    8396 kubeadm.go:309] 
	I0507 18:34:31.691217    8396 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0507 18:34:31.691217    8396 kubeadm.go:309] 
	I0507 18:34:31.691217    8396 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token wq75wp.g5obqxh3w2h2uzc4 \
	I0507 18:34:31.691795    8396 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 
	I0507 18:34:31.691907    8396 cni.go:84] Creating CNI manager for ""
	I0507 18:34:31.691907    8396 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0507 18:34:31.694919    8396 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0507 18:34:31.704618    8396 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0507 18:34:31.712587    8396 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0507 18:34:31.712587    8396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0507 18:34:31.760956    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0507 18:34:32.223224    8396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0507 18:34:32.235001    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-210800 minikube.k8s.io/updated_at=2024_05_07T18_34_32_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f minikube.k8s.io/name=ha-210800 minikube.k8s.io/primary=true
	I0507 18:34:32.235550    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:32.249883    8396 ops.go:34] apiserver oom_adj: -16
	I0507 18:34:32.443977    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:32.953029    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:33.455184    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:33.954748    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:34.458483    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:34.960374    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:35.456448    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:35.949035    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:36.445521    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:36.953207    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:37.454167    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:37.957439    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:38.462131    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:38.959140    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:39.457870    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:39.944282    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:40.448077    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:40.944537    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:41.446618    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:41.951533    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:42.456184    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:42.958353    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:43.444040    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:43.946593    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:44.450533    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:44.626392    8396 kubeadm.go:1107] duration metric: took 12.4022566s to wait for elevateKubeSystemPrivileges
	W0507 18:34:44.626544    8396 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0507 18:34:44.626544    8396 kubeadm.go:393] duration metric: took 26.4077927s to StartCluster
	I0507 18:34:44.626544    8396 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:44.626904    8396 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:34:44.627734    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:44.629650    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0507 18:34:44.629650    8396 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.132.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:34:44.629650    8396 start.go:240] waiting for startup goroutines ...
	I0507 18:34:44.629650    8396 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0507 18:34:44.629650    8396 addons.go:69] Setting storage-provisioner=true in profile "ha-210800"
	I0507 18:34:44.629650    8396 addons.go:69] Setting default-storageclass=true in profile "ha-210800"
	I0507 18:34:44.629650    8396 addons.go:234] Setting addon storage-provisioner=true in "ha-210800"
	I0507 18:34:44.629650    8396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-210800"
	I0507 18:34:44.630261    8396 host.go:66] Checking if "ha-210800" exists ...
	I0507 18:34:44.630261    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:34:44.631155    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:34:44.631802    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:34:44.790548    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0507 18:34:45.118544    8396 start.go:946] {"host.minikube.internal": 172.19.128.1} host record injected into CoreDNS's ConfigMap
	I0507 18:34:46.723769    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:34:46.724090    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:46.725054    8396 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:34:46.725302    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:34:46.725302    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:46.728032    8396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 18:34:46.725479    8396 kapi.go:59] client config for ha-210800: &rest.Config{Host:"https://172.19.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-210800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-210800\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 18:34:46.729454    8396 cert_rotation.go:137] Starting client certificate rotation controller
	I0507 18:34:46.729870    8396 addons.go:234] Setting addon default-storageclass=true in "ha-210800"
	I0507 18:34:46.730386    8396 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 18:34:46.730386    8396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0507 18:34:46.730386    8396 host.go:66] Checking if "ha-210800" exists ...
	I0507 18:34:46.730386    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:34:46.731457    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:34:48.808619    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:34:48.808619    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:48.808619    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:34:48.873739    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:34:48.874544    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:48.874602    8396 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0507 18:34:48.874602    8396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0507 18:34:48.874730    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:34:50.939859    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:34:50.939859    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:50.939859    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:34:51.324481    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:34:51.324481    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:51.324962    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:34:51.489552    8396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 18:34:53.300330    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:34:53.301103    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:53.301393    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:34:53.448203    8396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0507 18:34:53.588179    8396 round_trippers.go:463] GET https://172.19.143.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0507 18:34:53.588179    8396 round_trippers.go:469] Request Headers:
	I0507 18:34:53.588179    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:34:53.588293    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:34:53.599611    8396 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0507 18:34:53.601332    8396 round_trippers.go:463] PUT https://172.19.143.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0507 18:34:53.601390    8396 round_trippers.go:469] Request Headers:
	I0507 18:34:53.601390    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:34:53.601390    8396 round_trippers.go:473]     Content-Type: application/json
	I0507 18:34:53.601390    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:34:53.609600    8396 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 18:34:53.611561    8396 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0507 18:34:53.616552    8396 addons.go:505] duration metric: took 8.9862837s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0507 18:34:53.616552    8396 start.go:245] waiting for cluster config update ...
	I0507 18:34:53.616552    8396 start.go:254] writing updated cluster config ...
	I0507 18:34:53.618565    8396 out.go:177] 
	I0507 18:34:53.628560    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:34:53.629550    8396 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:34:53.633562    8396 out.go:177] * Starting "ha-210800-m02" control-plane node in "ha-210800" cluster
	I0507 18:34:53.637573    8396 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 18:34:53.638009    8396 cache.go:56] Caching tarball of preloaded images
	I0507 18:34:53.638009    8396 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0507 18:34:53.638009    8396 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 18:34:53.638009    8396 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:34:53.642353    8396 start.go:360] acquireMachinesLock for ha-210800-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 18:34:53.642466    8396 start.go:364] duration metric: took 56.5µs to acquireMachinesLock for "ha-210800-m02"
	I0507 18:34:53.642466    8396 start.go:93] Provisioning new machine with config: &{Name:ha-210800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.132.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:34:53.642466    8396 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0507 18:34:53.647503    8396 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 18:34:53.648256    8396 start.go:159] libmachine.API.Create for "ha-210800" (driver="hyperv")
	I0507 18:34:53.648256    8396 client.go:168] LocalClient.Create starting
	I0507 18:34:53.648256    8396 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0507 18:34:53.648909    8396 main.go:141] libmachine: Decoding PEM data...
	I0507 18:34:53.648909    8396 main.go:141] libmachine: Parsing certificate...
	I0507 18:34:53.649026    8396 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0507 18:34:53.649274    8396 main.go:141] libmachine: Decoding PEM data...
	I0507 18:34:53.649274    8396 main.go:141] libmachine: Parsing certificate...
	I0507 18:34:53.649423    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0507 18:34:55.302176    8396 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0507 18:34:55.302176    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:55.302176    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0507 18:34:56.875180    8396 main.go:141] libmachine: [stdout =====>] : False
	
	I0507 18:34:56.875180    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:56.875886    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 18:34:58.251062    8396 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 18:34:58.251062    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:58.251062    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 18:35:01.464890    8396 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 18:35:01.464890    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:01.467634    8396 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0507 18:35:01.799387    8396 main.go:141] libmachine: Creating SSH key...
	I0507 18:35:01.995893    8396 main.go:141] libmachine: Creating VM...
	I0507 18:35:01.995893    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 18:35:04.522097    8396 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 18:35:04.522193    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:04.522268    8396 main.go:141] libmachine: Using switch "Default Switch"
	I0507 18:35:04.522383    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 18:35:06.118344    8396 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 18:35:06.118344    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:06.118429    8396 main.go:141] libmachine: Creating VHD
	I0507 18:35:06.118547    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0507 18:35:09.615833    8396 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1950C92D-8A1C-4003-BE25-8D22A31CD17E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0507 18:35:09.615995    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:09.615995    8396 main.go:141] libmachine: Writing magic tar header
	I0507 18:35:09.616065    8396 main.go:141] libmachine: Writing SSH key tar header
	I0507 18:35:09.623873    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0507 18:35:12.572870    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:12.573110    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:12.573110    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\disk.vhd' -SizeBytes 20000MB
	I0507 18:35:14.901209    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:14.901209    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:14.901356    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-210800-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0507 18:35:18.087239    8396 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-210800-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0507 18:35:18.087239    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:18.087239    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-210800-m02 -DynamicMemoryEnabled $false
	I0507 18:35:20.055236    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:20.055236    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:20.056282    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-210800-m02 -Count 2
	I0507 18:35:22.026297    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:22.026297    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:22.026588    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-210800-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\boot2docker.iso'
	I0507 18:35:24.346541    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:24.346541    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:24.346541    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-210800-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\disk.vhd'
	I0507 18:35:26.735238    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:26.735294    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:26.735294    8396 main.go:141] libmachine: Starting VM...
	I0507 18:35:26.735294    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-210800-m02
	I0507 18:35:29.528101    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:29.528101    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:29.528101    8396 main.go:141] libmachine: Waiting for host to start...
	I0507 18:35:29.528972    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:35:31.570711    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:35:31.570743    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:31.570796    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:35:33.845737    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:33.845737    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:34.853209    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:35:36.824586    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:35:36.824586    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:36.824790    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:35:39.071254    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:39.071254    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:40.077071    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:35:42.046289    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:35:42.046289    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:42.046399    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:35:44.288312    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:44.288466    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:45.289093    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:35:47.242202    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:35:47.242202    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:47.242202    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:35:49.498242    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:49.498242    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:50.499395    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:35:52.478319    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:35:52.478319    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:52.478319    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:35:54.785362    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:35:54.785362    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:54.785517    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:35:56.701849    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:35:56.702148    8396 main.go:141] libmachine: [stderr =====>] : 
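Between 18:35:31 and 18:35:54 the driver issues the same pair of PowerShell queries five times before `ipaddresses[0]` finally returns `172.19.135.87`; empty stdout means the guest has not acquired a DHCP lease yet. That retry pattern can be sketched as follows (a hypothetical helper, not minikube's actual API; `query_ip` stands in for the `(( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]` invocation):

```python
import time
from typing import Callable

def wait_for_vm_ip(query_ip: Callable[[], str],
                   timeout: float = 120.0,
                   interval: float = 1.0) -> str:
    """Poll until the VM reports an IP address.

    Minimal sketch of the retry loop visible in the log: each attempt
    runs the PowerShell query; empty stdout (guest still booting, DHCP
    pending) counts as "not ready yet", so we sleep and try again.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        ip = query_ip().strip()
        if ip:                    # non-empty stdout => address assigned
            return ip
        time.sleep(interval)      # ~1s pause between attempts, as in the log
    raise TimeoutError("VM never reported an IP address")
```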
	I0507 18:35:56.702148    8396 machine.go:94] provisionDockerMachine start ...
	I0507 18:35:56.702313    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:35:58.633010    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:35:58.633010    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:58.633098    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:00.900889    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:00.900889    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:00.905979    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:36:00.916629    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.87 22 <nil> <nil>}
	I0507 18:36:00.916629    8396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 18:36:01.047128    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0507 18:36:01.047128    8396 buildroot.go:166] provisioning hostname "ha-210800-m02"
	I0507 18:36:01.047128    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:03.019255    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:03.019255    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:03.019255    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:05.386249    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:05.386249    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:05.390937    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:36:05.391216    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.87 22 <nil> <nil>}
	I0507 18:36:05.391216    8396 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-210800-m02 && echo "ha-210800-m02" | sudo tee /etc/hostname
	I0507 18:36:05.540314    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-210800-m02
	
	I0507 18:36:05.540424    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:07.498257    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:07.498317    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:07.498317    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:09.814825    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:09.814825    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:09.822974    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:36:09.823234    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.87 22 <nil> <nil>}
	I0507 18:36:09.823234    8396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-210800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-210800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-210800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 18:36:09.967390    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 18:36:09.967390    8396 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0507 18:36:09.967390    8396 buildroot.go:174] setting up certificates
	I0507 18:36:09.967390    8396 provision.go:84] configureAuth start
	I0507 18:36:09.967390    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:11.900723    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:11.900723    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:11.900723    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:14.162863    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:14.162984    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:14.162984    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:16.063190    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:16.063190    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:16.063190    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:18.401243    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:18.401243    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:18.401243    8396 provision.go:143] copyHostCerts
	I0507 18:36:18.401243    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0507 18:36:18.401243    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0507 18:36:18.401243    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0507 18:36:18.401853    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0507 18:36:18.402476    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0507 18:36:18.402476    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0507 18:36:18.402476    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0507 18:36:18.402476    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0507 18:36:18.403816    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0507 18:36:18.403816    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0507 18:36:18.403816    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0507 18:36:18.403816    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0507 18:36:18.404487    8396 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-210800-m02 san=[127.0.0.1 172.19.135.87 ha-210800-m02 localhost minikube]
	I0507 18:36:18.717435    8396 provision.go:177] copyRemoteCerts
	I0507 18:36:18.725073    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 18:36:18.725073    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:20.663851    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:20.664496    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:20.664496    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:22.996756    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:22.996756    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:22.997304    8396 sshutil.go:53] new ssh client: &{IP:172.19.135.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
	I0507 18:36:23.100348    8396 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3749722s)
	I0507 18:36:23.100348    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0507 18:36:23.100348    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0507 18:36:23.145451    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0507 18:36:23.146414    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0507 18:36:23.198247    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0507 18:36:23.198247    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0507 18:36:23.242848    8396 provision.go:87] duration metric: took 13.2745419s to configureAuth
	I0507 18:36:23.242848    8396 buildroot.go:189] setting minikube options for container-runtime
	I0507 18:36:23.243467    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:36:23.243467    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:25.184681    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:25.184681    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:25.185320    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:27.491096    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:27.491169    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:27.496471    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:36:27.496471    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.87 22 <nil> <nil>}
	I0507 18:36:27.496471    8396 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 18:36:27.624201    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 18:36:27.624248    8396 buildroot.go:70] root file system type: tmpfs
	I0507 18:36:27.624248    8396 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 18:36:27.624248    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:29.511759    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:29.511759    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:29.511759    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:31.785015    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:31.785779    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:31.789385    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:36:31.789385    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.87 22 <nil> <nil>}
	I0507 18:36:31.790011    8396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.132.69"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 18:36:31.957966    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.132.69
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 18:36:31.958091    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:33.813960    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:33.814405    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:33.814454    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:36.087126    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:36.087126    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:36.091729    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:36:36.092253    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.87 22 <nil> <nil>}
	I0507 18:36:36.092253    8396 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 18:36:38.165921    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0507 18:36:38.165921    8396 machine.go:97] duration metric: took 41.4608334s to provisionDockerMachine
	I0507 18:36:38.166025    8396 client.go:171] duration metric: took 1m44.5104599s to LocalClient.Create
	I0507 18:36:38.166025    8396 start.go:167] duration metric: took 1m44.5105636s to libmachine.API.Create "ha-210800"
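The unit-file update a few lines above uses the idiom `diff -u live new || { mv new live; systemctl daemon-reload && systemctl enable docker && systemctl restart docker; }`: `diff` exits non-zero both when the files differ and when the live file does not exist yet ("can't stat ... No such file or directory"), which is exactly the first-provision case logged here. A minimal sketch of that decision, as a hypothetical helper rather than minikube's actual code:

```python
import os

def install_unit_if_changed(staged: str, live: str) -> bool:
    """Promote a staged unit file only when it differs from the live one.

    Returns True when the caller should daemon-reload and restart the
    service (file changed or did not exist), False when the staged copy
    was identical and has simply been discarded.
    """
    try:
        with open(live) as old, open(staged) as new:
            identical = old.read() == new.read()
    except FileNotFoundError:
        identical = False          # first install: diff "can't stat" the live file
    if identical:
        os.remove(staged)          # nothing changed; drop the staged copy
        return False
    os.replace(staged, live)       # move the new unit into place
    return True                    # daemon-reload + enable + restart needed
```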
	I0507 18:36:38.166025    8396 start.go:293] postStartSetup for "ha-210800-m02" (driver="hyperv")
	I0507 18:36:38.166025    8396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 18:36:38.174369    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 18:36:38.175374    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:40.063971    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:40.064139    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:40.064139    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:42.346137    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:42.346354    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:42.346732    8396 sshutil.go:53] new ssh client: &{IP:172.19.135.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
	I0507 18:36:42.456065    8396 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2814012s)
	I0507 18:36:42.467287    8396 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 18:36:42.473746    8396 info.go:137] Remote host: Buildroot 2023.02.9
	I0507 18:36:42.473746    8396 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0507 18:36:42.473746    8396 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0507 18:36:42.473746    8396 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> 99922.pem in /etc/ssl/certs
	I0507 18:36:42.473746    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /etc/ssl/certs/99922.pem
	I0507 18:36:42.483093    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0507 18:36:42.504052    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /etc/ssl/certs/99922.pem (1708 bytes)
	I0507 18:36:42.547781    8396 start.go:296] duration metric: took 4.3814539s for postStartSetup
	I0507 18:36:42.549725    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:44.454070    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:44.454070    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:44.455094    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:46.707416    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:46.708357    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:46.708357    8396 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:36:46.709511    8396 start.go:128] duration metric: took 1m53.0592505s to createHost
	I0507 18:36:46.710107    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:48.586410    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:48.587408    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:48.587520    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:50.829564    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:50.829564    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:50.834093    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:36:50.834620    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.87 22 <nil> <nil>}
	I0507 18:36:50.834620    8396 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0507 18:36:50.959427    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715107011.165850792
	
	I0507 18:36:50.959427    8396 fix.go:216] guest clock: 1715107011.165850792
	I0507 18:36:50.959427    8396 fix.go:229] Guest: 2024-05-07 18:36:51.165850792 +0000 UTC Remote: 2024-05-07 18:36:46.710028 +0000 UTC m=+306.542564401 (delta=4.455822792s)
	I0507 18:36:50.959427    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:52.848185    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:52.848185    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:52.848185    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:55.122226    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:55.122226    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:55.126779    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:36:55.127304    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.87 22 <nil> <nil>}
	I0507 18:36:55.127341    8396 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715107010
	I0507 18:36:55.259970    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May  7 18:36:50 UTC 2024
	
	I0507 18:36:55.259970    8396 fix.go:236] clock set: Tue May  7 18:36:50 UTC 2024
	 (err=<nil>)
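The guest clock, host "Remote" timestamp, and logged delta above are mutually consistent; the arithmetic can be checked directly with the values copied from the log:

```python
from datetime import datetime, timezone

# Values from the log: `date +%s.%N` on the guest, and the host-side
# "Remote" wall-clock timestamp (2024-05-07 18:36:46.710028 +0000 UTC).
guest = 1715107011.165850792
host = datetime(2024, 5, 7, 18, 36, 46, 710028,
                tzinfo=timezone.utc).timestamp()

delta = guest - host   # ~4.4558s, matching the logged delta=4.455822792s
```

Since the delta exceeds minikube's tolerance, the provisioner resets the guest clock with `sudo date -s @<epoch>` on the next SSH command.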
	I0507 18:36:55.259970    8396 start.go:83] releasing machines lock for "ha-210800-m02", held for 2m1.60912s
	I0507 18:36:55.260991    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:57.177701    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:57.178380    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:57.178491    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:59.484964    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:59.485453    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:59.499630    8396 out.go:177] * Found network options:
	I0507 18:36:59.503173    8396 out.go:177]   - NO_PROXY=172.19.132.69
	W0507 18:36:59.505237    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	I0507 18:36:59.507381    8396 out.go:177]   - NO_PROXY=172.19.132.69
	W0507 18:36:59.510059    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	W0507 18:36:59.511746    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	I0507 18:36:59.513647    8396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 18:36:59.513792    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:59.521577    8396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0507 18:36:59.521577    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:37:01.482424    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:37:01.482424    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:01.482511    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:37:01.510612    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:37:01.510612    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:01.510612    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:37:03.867682    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:37:03.867682    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:03.868908    8396 sshutil.go:53] new ssh client: &{IP:172.19.135.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
	I0507 18:37:03.891314    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:37:03.892357    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:03.892682    8396 sshutil.go:53] new ssh client: &{IP:172.19.135.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
	I0507 18:37:04.043891    8396 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5219212s)
	I0507 18:37:04.043891    8396 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5298603s)
	W0507 18:37:04.043969    8396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 18:37:04.053902    8396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0507 18:37:04.082542    8396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
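The CNI-disable step above renames any bridge/podman configs out of the way by appending `.mk_disabled`. A minimal local reproduction, using `/tmp/cni-demo` as a stand-in for the guest's `/etc/cni/net.d` (the directory and file names here are illustrative):

```shell
# Stand-in for /etc/cni/net.d with one matching and one non-matching config.
mkdir -p /tmp/cni-demo
touch /tmp/cni-demo/87-podman-bridge.conflist /tmp/cni-demo/10-kubenet.conf
# Rename bridge/podman configs, skipping anything already disabled (GNU find).
find /tmp/cni-demo -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
```

Only `87-podman-bridge.conflist` matches and is renamed; the kubenet config is left untouched, matching the single entry reported in the `cni.go:262` line above.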
	I0507 18:37:04.082670    8396 start.go:494] detecting cgroup driver to use...
	I0507 18:37:04.082954    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 18:37:04.126289    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0507 18:37:04.153715    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 18:37:04.172629    8396 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 18:37:04.181737    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 18:37:04.207003    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 18:37:04.233343    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 18:37:04.259767    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 18:37:04.287222    8396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 18:37:04.314043    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 18:37:04.340933    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 18:37:04.369680    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0507 18:37:04.397010    8396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 18:37:04.421308    8396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 18:37:04.445548    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:37:04.629394    8396 ssh_runner.go:195] Run: sudo systemctl restart containerd
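The `sed` edits above pin the pause image and force the cgroupfs driver in containerd's config. A sketch of the two key substitutions against a scratch copy (`/tmp/config.toml` is a stand-in for the guest's `/etc/containerd/config.toml`, with made-up starting values):

```shell
# Scratch containerd config with the values the log rewrites.
cat > /tmp/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Pin the sandbox (pause) image, preserving indentation via the \1 backreference.
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /tmp/config.toml
# Switch runc from the systemd cgroup driver to cgroupfs.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/config.toml
```

The indentation-preserving `( *)` capture is what lets the same expression work regardless of how deeply the key is nested in the TOML.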
	I0507 18:37:04.664712    8396 start.go:494] detecting cgroup driver to use...
	I0507 18:37:04.678930    8396 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 18:37:04.716909    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 18:37:04.747292    8396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 18:37:04.784326    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 18:37:04.816252    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 18:37:04.847205    8396 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0507 18:37:04.899432    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 18:37:04.921082    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 18:37:04.965108    8396 ssh_runner.go:195] Run: which cri-dockerd
	I0507 18:37:04.979458    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 18:37:04.997174    8396 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 18:37:05.035246    8396 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 18:37:05.224435    8396 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 18:37:05.411129    8396 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 18:37:05.411459    8396 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0507 18:37:05.451260    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:37:05.638005    8396 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 18:37:08.129421    8396 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.49117s)
	I0507 18:37:08.140666    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 18:37:08.169675    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 18:37:08.206191    8396 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 18:37:08.387393    8396 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 18:37:08.568759    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:37:08.751042    8396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 18:37:08.788382    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 18:37:08.820363    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:37:09.003468    8396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 18:37:09.098218    8396 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 18:37:09.105844    8396 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 18:37:09.114636    8396 start.go:562] Will wait 60s for crictl version
	I0507 18:37:09.122774    8396 ssh_runner.go:195] Run: which crictl
	I0507 18:37:09.137755    8396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 18:37:09.187114    8396 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0507 18:37:09.193910    8396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 18:37:09.227461    8396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 18:37:09.257597    8396 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0507 18:37:09.260632    8396 out.go:177]   - env NO_PROXY=172.19.132.69
	I0507 18:37:09.262597    8396 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0507 18:37:09.266594    8396 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0507 18:37:09.266594    8396 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0507 18:37:09.266594    8396 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0507 18:37:09.266594    8396 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a3:a5:4f Flags:up|broadcast|multicast|running}
	I0507 18:37:09.269601    8396 ip.go:210] interface addr: fe80::1edb:f5fd:c218:d8d2/64
	I0507 18:37:09.269601    8396 ip.go:210] interface addr: 172.19.128.1/20
	I0507 18:37:09.277610    8396 ssh_runner.go:195] Run: grep 172.19.128.1	host.minikube.internal$ /etc/hosts
	I0507 18:37:09.284239    8396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
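The `/etc/hosts` update above uses a filter-then-append idiom so the entry is upserted idempotently. A local sketch with `/tmp/hosts` standing in for `/etc/hosts` (the stale `10.0.0.9` entry is invented for illustration):

```shell
# Stand-in hosts file containing a stale host.minikube.internal entry.
tab="$(printf '\t')"
printf '10.0.0.9\thost.minikube.internal\n127.0.0.1\tlocalhost\n' > /tmp/hosts
# Drop any old entry for the name, then append the current one.
{ grep -v "${tab}host.minikube.internal\$" /tmp/hosts; \
  printf '172.19.128.1\thost.minikube.internal\n'; } > /tmp/hosts.new
cp /tmp/hosts.new /tmp/hosts
```

Anchoring the pattern on the tab and end-of-line means only the exact hostname entry is replaced; unrelated lines pass through untouched.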
	I0507 18:37:09.307355    8396 mustload.go:65] Loading cluster: ha-210800
	I0507 18:37:09.307920    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:37:09.308395    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:37:11.165135    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:37:11.165891    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:11.165891    8396 host.go:66] Checking if "ha-210800" exists ...
	I0507 18:37:11.166448    8396 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800 for IP: 172.19.135.87
	I0507 18:37:11.166448    8396 certs.go:194] generating shared ca certs ...
	I0507 18:37:11.166521    8396 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:37:11.167120    8396 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0507 18:37:11.167502    8396 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0507 18:37:11.167694    8396 certs.go:256] generating profile certs ...
	I0507 18:37:11.168199    8396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.key
	I0507 18:37:11.168333    8396 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.8baf5605
	I0507 18:37:11.168399    8396 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.8baf5605 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.132.69 172.19.135.87 172.19.143.254]
	I0507 18:37:11.318887    8396 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.8baf5605 ...
	I0507 18:37:11.318887    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.8baf5605: {Name:mk35e8980a1be180b9dd44f1c2ba2dbe349f4b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:37:11.320502    8396 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.8baf5605 ...
	I0507 18:37:11.320502    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.8baf5605: {Name:mk357a8d7b50038f91b10e63854b4690ca652ef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:37:11.321286    8396 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.8baf5605 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt
	I0507 18:37:11.333498    8396 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.8baf5605 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key
	I0507 18:37:11.336089    8396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key
	I0507 18:37:11.336089    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0507 18:37:11.336089    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0507 18:37:11.336089    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0507 18:37:11.336089    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0507 18:37:11.336089    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0507 18:37:11.336089    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0507 18:37:11.336089    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0507 18:37:11.336089    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0507 18:37:11.337616    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem (1338 bytes)
	W0507 18:37:11.337999    8396 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992_empty.pem, impossibly tiny 0 bytes
	I0507 18:37:11.337999    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0507 18:37:11.338225    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0507 18:37:11.338613    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0507 18:37:11.338843    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0507 18:37:11.339433    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem (1708 bytes)
	I0507 18:37:11.339627    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem -> /usr/share/ca-certificates/9992.pem
	I0507 18:37:11.339825    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /usr/share/ca-certificates/99922.pem
	I0507 18:37:11.339942    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:37:11.340274    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:37:13.210015    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:37:13.210436    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:13.210532    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:37:15.557119    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:37:15.557756    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:15.557818    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:37:15.655043    8396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0507 18:37:15.663299    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0507 18:37:15.695500    8396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0507 18:37:15.703402    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0507 18:37:15.734444    8396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0507 18:37:15.741330    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0507 18:37:15.774955    8396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0507 18:37:15.781555    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0507 18:37:15.807296    8396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0507 18:37:15.813150    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0507 18:37:15.840423    8396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0507 18:37:15.847078    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0507 18:37:15.866708    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 18:37:15.921119    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 18:37:15.963844    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 18:37:16.005197    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0507 18:37:16.048372    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0507 18:37:16.091763    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0507 18:37:16.133774    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0507 18:37:16.176062    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0507 18:37:16.217463    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem --> /usr/share/ca-certificates/9992.pem (1338 bytes)
	I0507 18:37:16.259673    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /usr/share/ca-certificates/99922.pem (1708 bytes)
	I0507 18:37:16.301502    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 18:37:16.343924    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0507 18:37:16.373370    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0507 18:37:16.402362    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0507 18:37:16.436385    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0507 18:37:16.463732    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0507 18:37:16.492226    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0507 18:37:16.524147    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0507 18:37:16.564003    8396 ssh_runner.go:195] Run: openssl version
	I0507 18:37:16.580874    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9992.pem && ln -fs /usr/share/ca-certificates/9992.pem /etc/ssl/certs/9992.pem"
	I0507 18:37:16.607027    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9992.pem
	I0507 18:37:16.612868    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 18:37:16.623865    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9992.pem
	I0507 18:37:16.640722    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9992.pem /etc/ssl/certs/51391683.0"
	I0507 18:37:16.668111    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99922.pem && ln -fs /usr/share/ca-certificates/99922.pem /etc/ssl/certs/99922.pem"
	I0507 18:37:16.698614    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99922.pem
	I0507 18:37:16.705594    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 18:37:16.716265    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99922.pem
	I0507 18:37:16.732438    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99922.pem /etc/ssl/certs/3ec20f2e.0"
	I0507 18:37:16.758374    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 18:37:16.786075    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:37:16.793262    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:37:16.803030    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:37:16.817966    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
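The `openssl x509 -hash` / `ln -fs` pairs above build OpenSSL's subject-hash lookup links: each trusted PEM gets a `<hash>.0` symlink so the library can find it by subject. A sketch with a throwaway self-signed cert (names and `/tmp` paths are illustrative, not the cluster's real CA):

```shell
# Generate a throwaway self-signed certificate to hash and link.
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demoCA' -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null
# Compute the subject hash OpenSSL uses for trust-store lookups.
h="$(openssl x509 -hash -noout -in /tmp/demo.pem)"
# Create the <hash>.0 link a trust directory expects.
ln -fs /tmp/demo.pem "/tmp/${h}.0"
```

The `.0` suffix disambiguates distinct certificates whose subjects happen to hash to the same value; minikube's `test -L ... || ln -fs` wrapper makes the linking step idempotent across restarts.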
	I0507 18:37:16.844006    8396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 18:37:16.851097    8396 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0507 18:37:16.851097    8396 kubeadm.go:928] updating node {m02 172.19.135.87 8443 v1.30.0 docker true true} ...
	I0507 18:37:16.851097    8396 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-210800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.135.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0507 18:37:16.851628    8396 kube-vip.go:111] generating kube-vip config ...
	I0507 18:37:16.861303    8396 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0507 18:37:16.886603    8396 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0507 18:37:16.886752    8396 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.143.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0507 18:37:16.895923    8396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0507 18:37:16.912700    8396 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0507 18:37:16.921310    8396 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0507 18:37:16.941484    8396 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
	I0507 18:37:16.941691    8396 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0507 18:37:16.941795    8396 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
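The `checksum=file:` downloads above validate each binary against its published `.sha256` sidecar, which boils down to a `sha256sum --check` over a `<hash>  <file>` line. A local stand-in with a fake payload (no network; `/tmp/kubectl.bin` is not a real binary):

```shell
# Fake payload standing in for a downloaded binary.
printf 'fake-kubectl-bytes' > /tmp/kubectl.bin
# Record its digest in the "<hash>  <path>" format sha256sum expects.
sha256sum /tmp/kubectl.bin > /tmp/kubectl.bin.sha256
# Verification step: exits non-zero if the file was corrupted in transit.
sha256sum --check /tmp/kubectl.bin.sha256
```
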
	I0507 18:37:17.971653    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0507 18:37:17.980284    8396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0507 18:37:17.988204    8396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0507 18:37:17.988407    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0507 18:37:18.592076    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0507 18:37:18.600906    8396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0507 18:37:18.607343    8396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0507 18:37:18.608345    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0507 18:37:19.351886    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 18:37:19.375900    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0507 18:37:19.384993    8396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0507 18:37:19.391883    8396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0507 18:37:19.392115    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0507 18:37:19.991339    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0507 18:37:20.007259    8396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0507 18:37:20.035352    8396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 18:37:20.064679    8396 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0507 18:37:20.105207    8396 ssh_runner.go:195] Run: grep 172.19.143.254	control-plane.minikube.internal$ /etc/hosts
	I0507 18:37:20.112978    8396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 18:37:20.142887    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:37:20.327605    8396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 18:37:20.490146    8396 host.go:66] Checking if "ha-210800" exists ...
	I0507 18:37:20.498513    8396 start.go:316] joinCluster: &{Name:ha-210800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.132.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.135.87 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 18:37:20.498513    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0507 18:37:20.498513    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:37:22.406803    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:37:22.406803    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:22.406999    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:37:24.714235    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:37:24.714309    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:24.714654    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:37:24.917515    8396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.4186984s)
	I0507 18:37:24.917613    8396 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.19.135.87 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:37:24.917613    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wju3ow.zru46704qlro3ubh --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-210800-m02 --control-plane --apiserver-advertise-address=172.19.135.87 --apiserver-bind-port=8443"
	I0507 18:38:05.735195    8396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wju3ow.zru46704qlro3ubh --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-210800-m02 --control-plane --apiserver-advertise-address=172.19.135.87 --apiserver-bind-port=8443": (40.8147749s)
	I0507 18:38:05.735195    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0507 18:38:06.486435    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-210800-m02 minikube.k8s.io/updated_at=2024_05_07T18_38_06_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f minikube.k8s.io/name=ha-210800 minikube.k8s.io/primary=false
	I0507 18:38:06.660088    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-210800-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0507 18:38:06.809499    8396 start.go:318] duration metric: took 46.3078008s to joinCluster
	I0507 18:38:06.809697    8396 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.135.87 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:38:06.812704    8396 out.go:177] * Verifying Kubernetes components...
	I0507 18:38:06.810543    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:38:06.823813    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:38:07.164428    8396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 18:38:07.211420    8396 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:38:07.211420    8396 kapi.go:59] client config for ha-210800: &rest.Config{Host:"https://172.19.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-210800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-210800\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0507 18:38:07.211420    8396 kubeadm.go:477] Overriding stale ClientConfig host https://172.19.143.254:8443 with https://172.19.132.69:8443
	I0507 18:38:07.212429    8396 node_ready.go:35] waiting up to 6m0s for node "ha-210800-m02" to be "Ready" ...
	I0507 18:38:07.212429    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:07.212429    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:07.212429    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:07.212429    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:07.236261    8396 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0507 18:38:07.712907    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:07.713063    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:07.713063    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:07.713063    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:08.062513    8396 round_trippers.go:574] Response Status: 200 OK in 349 milliseconds
	I0507 18:38:08.220311    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:08.220530    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:08.220530    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:08.220530    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:08.226332    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:08.714864    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:08.714944    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:08.714944    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:08.714944    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:08.719950    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:09.222641    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:09.222641    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:09.222641    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:09.222641    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:09.227753    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:09.228892    8396 node_ready.go:53] node "ha-210800-m02" has status "Ready":"False"
	I0507 18:38:09.717031    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:09.717031    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:09.717031    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:09.717182    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:09.722336    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:10.225740    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:10.225740    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:10.225740    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:10.225740    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:10.658343    8396 round_trippers.go:574] Response Status: 200 OK in 431 milliseconds
	I0507 18:38:10.713126    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:10.713126    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:10.713126    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:10.713126    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:10.716862    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:11.216552    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:11.216552    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:11.216552    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:11.216552    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:11.222895    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:38:11.723679    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:11.723764    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:11.723764    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:11.723764    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:11.728123    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:11.729851    8396 node_ready.go:53] node "ha-210800-m02" has status "Ready":"False"
	I0507 18:38:12.228465    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:12.228465    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:12.228465    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:12.228465    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:12.235375    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:38:12.727131    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:12.727247    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:12.727247    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:12.727247    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:12.732701    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:13.216903    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:13.217109    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:13.217109    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:13.217109    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:13.225109    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:38:13.716032    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:13.716032    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:13.716032    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:13.716032    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:13.721598    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:14.216775    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:14.216775    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.216775    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.216775    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.229114    8396 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0507 18:38:14.229114    8396 node_ready.go:49] node "ha-210800-m02" has status "Ready":"True"
	I0507 18:38:14.229114    8396 node_ready.go:38] duration metric: took 7.0162034s for node "ha-210800-m02" to be "Ready" ...
	I0507 18:38:14.229705    8396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 18:38:14.229705    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:38:14.229833    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.229833    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.229833    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.235019    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:14.243661    8396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:14.243661    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cr9nn
	I0507 18:38:14.243661    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.243661    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.243661    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.247500    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:14.248492    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:14.248492    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.248492    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.248492    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.253535    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:14.254317    8396 pod_ready.go:92] pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:14.254317    8396 pod_ready.go:81] duration metric: took 10.6552ms for pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:14.254317    8396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:14.254409    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dxsqf
	I0507 18:38:14.254456    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.254456    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.254483    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.258002    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:14.259323    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:14.259323    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.259323    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.259323    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.263619    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:14.264441    8396 pod_ready.go:92] pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:14.264536    8396 pod_ready.go:81] duration metric: took 10.2192ms for pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:14.264536    8396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:14.264739    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800
	I0507 18:38:14.264756    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.264756    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.264756    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.266968    8396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:38:14.268584    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:14.268623    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.268623    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.268653    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.271791    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:14.271791    8396 pod_ready.go:92] pod "etcd-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:14.271791    8396 pod_ready.go:81] duration metric: took 7.2539ms for pod "etcd-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:14.271791    8396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:14.271791    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:14.271791    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.271791    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.271791    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.276375    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:14.276375    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:14.276913    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.276976    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.276976    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.280068    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:14.782005    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:14.782109    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.782140    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.782140    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.786748    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:14.787964    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:14.787964    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.788047    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.788047    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.795225    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:38:15.278633    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:15.278782    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:15.278782    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:15.278782    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:15.283153    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:15.284429    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:15.284429    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:15.284511    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:15.284511    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:15.288511    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:15.776848    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:15.776848    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:15.776848    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:15.776848    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:15.780615    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:15.781377    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:15.781377    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:15.781377    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:15.781377    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:15.785968    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:16.277424    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:16.277424    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:16.277424    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:16.277424    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:16.281572    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:16.283217    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:16.283217    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:16.283217    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:16.283304    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:16.287516    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:16.288179    8396 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 18:38:16.773095    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:16.773202    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:16.773202    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:16.773202    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:16.782408    8396 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0507 18:38:16.783445    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:16.783445    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:16.783445    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:16.783445    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:16.787040    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:17.278164    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:17.278228    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:17.278261    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:17.278261    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:17.282058    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:17.283893    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:17.283986    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:17.283986    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:17.283986    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:17.287711    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:17.776668    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:17.776758    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:17.776758    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:17.776837    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:17.785328    8396 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 18:38:17.786547    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:17.786547    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:17.786620    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:17.786620    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:17.790057    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:18.277169    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:18.277169    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:18.277169    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:18.277169    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:18.282767    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:18.283809    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:18.283870    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:18.283870    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:18.283870    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:18.287818    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:18.288402    8396 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 18:38:18.781392    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:18.781472    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:18.781472    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:18.781472    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:18.786206    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:18.786999    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:18.786999    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:18.787087    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:18.787087    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:18.790937    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:19.282473    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:19.282473    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:19.282473    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:19.282566    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:19.286511    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:19.287882    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:19.287882    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:19.287882    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:19.287882    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:19.301568    8396 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0507 18:38:19.772556    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:19.772625    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:19.772694    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:19.772694    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:19.778977    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:38:19.779831    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:19.780366    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:19.780366    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:19.780366    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:19.783046    8396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:38:20.282938    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:20.283005    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:20.283005    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:20.283074    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:20.287537    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:20.288513    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:20.288607    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:20.288607    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:20.288607    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:20.292662    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:20.293302    8396 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 18:38:20.784037    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:20.784037    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:20.784037    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:20.784037    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:20.789152    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:20.790013    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:20.790084    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:20.790084    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:20.790084    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:20.794322    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:21.287286    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:21.287286    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:21.287286    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:21.287286    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:21.292862    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:21.294871    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:21.294871    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:21.294871    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:21.294871    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:21.306491    8396 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0507 18:38:21.776226    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:21.776299    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:21.776299    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:21.776299    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:21.781532    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:21.782163    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:21.782163    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:21.782163    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:21.782163    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:21.786865    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:22.283387    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:22.283387    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.283387    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.283696    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.288416    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:22.288416    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:22.288416    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.289231    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.289231    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.293315    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:22.293839    8396 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 18:38:22.777200    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:22.777200    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.777200    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.777200    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.781680    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:22.782621    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:22.782726    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.782726    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.782726    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.789055    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:38:22.790670    8396 pod_ready.go:92] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:22.790733    8396 pod_ready.go:81] duration metric: took 8.518358s for pod "etcd-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.790733    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.790832    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-210800
	I0507 18:38:22.790832    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.790832    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.790832    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.795694    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:22.801928    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:22.801928    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.801928    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.801928    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.807452    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:22.808657    8396 pod_ready.go:92] pod "kube-apiserver-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:22.808657    8396 pod_ready.go:81] duration metric: took 17.9226ms for pod "kube-apiserver-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.808657    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.808657    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-210800-m02
	I0507 18:38:22.808657    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.808657    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.808657    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.813414    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:22.813829    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:22.813829    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.813829    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.813829    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.818566    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:22.818984    8396 pod_ready.go:92] pod "kube-apiserver-ha-210800-m02" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:22.819041    8396 pod_ready.go:81] duration metric: took 10.3835ms for pod "kube-apiserver-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.819041    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.819147    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-210800
	I0507 18:38:22.819147    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.819215    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.819215    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.823456    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:22.824159    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:22.824159    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.824159    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.824210    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.832757    8396 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 18:38:22.832757    8396 pod_ready.go:92] pod "kube-controller-manager-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:22.832757    8396 pod_ready.go:81] duration metric: took 13.7152ms for pod "kube-controller-manager-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.832757    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.832757    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-210800-m02
	I0507 18:38:22.832757    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.832757    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.832757    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.855467    8396 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0507 18:38:22.856263    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:22.856263    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.856263    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.856324    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.861130    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:22.862529    8396 pod_ready.go:92] pod "kube-controller-manager-ha-210800-m02" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:22.862529    8396 pod_ready.go:81] duration metric: took 29.7698ms for pod "kube-controller-manager-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.862529    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6qdqt" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.981644    8396 request.go:629] Waited for 118.7757ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6qdqt
	I0507 18:38:22.981744    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6qdqt
	I0507 18:38:22.981744    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.981744    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.981828    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.987571    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:23.186839    8396 request.go:629] Waited for 197.9896ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:23.187078    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:23.187078    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:23.187078    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:23.187078    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:23.191351    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:23.193098    8396 pod_ready.go:92] pod "kube-proxy-6qdqt" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:23.193223    8396 pod_ready.go:81] duration metric: took 330.5658ms for pod "kube-proxy-6qdqt" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:23.193223    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rshfg" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:23.389300    8396 request.go:629] Waited for 195.9536ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rshfg
	I0507 18:38:23.389300    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rshfg
	I0507 18:38:23.389300    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:23.389724    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:23.389724    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:23.394095    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:23.589966    8396 request.go:629] Waited for 195.0566ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:23.589966    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:23.589966    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:23.589966    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:23.589966    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:23.595314    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:23.597174    8396 pod_ready.go:92] pod "kube-proxy-rshfg" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:23.597240    8396 pod_ready.go:81] duration metric: took 403.9895ms for pod "kube-proxy-rshfg" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:23.597307    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:23.777721    8396 request.go:629] Waited for 180.0324ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800
	I0507 18:38:23.778094    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800
	I0507 18:38:23.778186    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:23.778186    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:23.778186    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:23.782724    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:23.980754    8396 request.go:629] Waited for 197.092ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:23.981046    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:23.981046    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:23.981237    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:23.981346    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:23.985610    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:23.986989    8396 pod_ready.go:92] pod "kube-scheduler-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:23.986989    8396 pod_ready.go:81] duration metric: took 389.6559ms for pod "kube-scheduler-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:23.987100    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:24.183923    8396 request.go:629] Waited for 196.572ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800-m02
	I0507 18:38:24.184210    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800-m02
	I0507 18:38:24.184361    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:24.184361    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:24.184361    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:24.188680    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:24.387628    8396 request.go:629] Waited for 197.6213ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:24.388070    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:24.388070    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:24.388157    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:24.388157    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:24.391350    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:24.392892    8396 pod_ready.go:92] pod "kube-scheduler-ha-210800-m02" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:24.392892    8396 pod_ready.go:81] duration metric: took 405.7642ms for pod "kube-scheduler-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:24.392956    8396 pod_ready.go:38] duration metric: took 10.1625532s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 18:38:24.392956    8396 api_server.go:52] waiting for apiserver process to appear ...
	I0507 18:38:24.401692    8396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 18:38:24.426342    8396 api_server.go:72] duration metric: took 17.6152636s to wait for apiserver process to appear ...
	I0507 18:38:24.426342    8396 api_server.go:88] waiting for apiserver healthz status ...
	I0507 18:38:24.426399    8396 api_server.go:253] Checking apiserver healthz at https://172.19.132.69:8443/healthz ...
	I0507 18:38:24.435538    8396 api_server.go:279] https://172.19.132.69:8443/healthz returned 200:
	ok
	I0507 18:38:24.436514    8396 round_trippers.go:463] GET https://172.19.132.69:8443/version
	I0507 18:38:24.436553    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:24.436553    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:24.436553    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:24.437592    8396 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0507 18:38:24.437843    8396 api_server.go:141] control plane version: v1.30.0
	I0507 18:38:24.437843    8396 api_server.go:131] duration metric: took 11.4995ms to wait for apiserver health ...
	I0507 18:38:24.437843    8396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0507 18:38:24.591225    8396 request.go:629] Waited for 153.144ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:38:24.591315    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:38:24.591315    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:24.591431    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:24.591431    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:24.599025    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:38:24.606019    8396 system_pods.go:59] 17 kube-system pods found
	I0507 18:38:24.606019    8396 system_pods.go:61] "coredns-7db6d8ff4d-cr9nn" [24c45106-2ef4-4932-ae5d-549fb0177b13] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "coredns-7db6d8ff4d-dxsqf" [d32c637e-c641-4ef7-b2ed-b6449fe7d50f] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "etcd-ha-210800" [6888d4a2-b10e-4329-b3de-90fc4bb053f3] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "etcd-ha-210800-m02" [97f10401-7c02-421d-abe4-2b9f37dd3f39] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kindnet-57g8k" [6067a407-ee57-44ab-9591-9217deded72a] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kindnet-whrqx" [ded04b26-3100-453a-9c0f-0a7cced93180] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-apiserver-ha-210800" [74b614eb-d1ef-4707-b1a9-faeb68a9abf4] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-apiserver-ha-210800-m02" [3399e7eb-50f0-49a6-9dbe-1d5964e62a63] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-controller-manager-ha-210800" [9d31f6b7-c758-4599-9087-d38a0f929769] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-controller-manager-ha-210800-m02" [e20ed11b-7d94-407a-a1cb-0440b3b29eb9] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-proxy-6qdqt" [83aff3e5-b08d-4b7e-8dc2-c2fd1fd9bec7] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-proxy-rshfg" [2ce7075a-2b4a-4e31-80bf-7de27797a8d6] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-scheduler-ha-210800" [37fbafc0-eae6-407e-8b45-9c0181aca8dc] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-scheduler-ha-210800-m02" [51a4f5d3-0f41-4420-87ce-5ac44bb93e3c] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-vip-ha-210800" [b1216eb2-830b-4756-97c6-a35d5e74c718] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-vip-ha-210800-m02" [ff2f83aa-9bdb-4dfc-98bf-d632984ef52d] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "storage-provisioner" [f05f26ec-1ebd-4111-adc5-825fc75a414d] Running
	I0507 18:38:24.606019    8396 system_pods.go:74] duration metric: took 168.1649ms to wait for pod list to return data ...
	I0507 18:38:24.606019    8396 default_sa.go:34] waiting for default service account to be created ...
	I0507 18:38:24.778357    8396 request.go:629] Waited for 172.1018ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/default/serviceaccounts
	I0507 18:38:24.778357    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/default/serviceaccounts
	I0507 18:38:24.778357    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:24.778357    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:24.778357    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:24.785539    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:38:24.785539    8396 default_sa.go:45] found service account: "default"
	I0507 18:38:24.785539    8396 default_sa.go:55] duration metric: took 179.5076ms for default service account to be created ...
	I0507 18:38:24.785539    8396 system_pods.go:116] waiting for k8s-apps to be running ...
	I0507 18:38:24.981805    8396 request.go:629] Waited for 196.2519ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:38:24.982397    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:38:24.982397    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:24.982506    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:24.982506    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:24.989973    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:38:24.996462    8396 system_pods.go:86] 17 kube-system pods found
	I0507 18:38:24.996462    8396 system_pods.go:89] "coredns-7db6d8ff4d-cr9nn" [24c45106-2ef4-4932-ae5d-549fb0177b13] Running
	I0507 18:38:24.996462    8396 system_pods.go:89] "coredns-7db6d8ff4d-dxsqf" [d32c637e-c641-4ef7-b2ed-b6449fe7d50f] Running
	I0507 18:38:24.996462    8396 system_pods.go:89] "etcd-ha-210800" [6888d4a2-b10e-4329-b3de-90fc4bb053f3] Running
	I0507 18:38:24.996462    8396 system_pods.go:89] "etcd-ha-210800-m02" [97f10401-7c02-421d-abe4-2b9f37dd3f39] Running
	I0507 18:38:24.996462    8396 system_pods.go:89] "kindnet-57g8k" [6067a407-ee57-44ab-9591-9217deded72a] Running
	I0507 18:38:24.996462    8396 system_pods.go:89] "kindnet-whrqx" [ded04b26-3100-453a-9c0f-0a7cced93180] Running
	I0507 18:38:24.996462    8396 system_pods.go:89] "kube-apiserver-ha-210800" [74b614eb-d1ef-4707-b1a9-faeb68a9abf4] Running
	I0507 18:38:24.996462    8396 system_pods.go:89] "kube-apiserver-ha-210800-m02" [3399e7eb-50f0-49a6-9dbe-1d5964e62a63] Running
	I0507 18:38:24.996462    8396 system_pods.go:89] "kube-controller-manager-ha-210800" [9d31f6b7-c758-4599-9087-d38a0f929769] Running
	I0507 18:38:24.997012    8396 system_pods.go:89] "kube-controller-manager-ha-210800-m02" [e20ed11b-7d94-407a-a1cb-0440b3b29eb9] Running
	I0507 18:38:24.997012    8396 system_pods.go:89] "kube-proxy-6qdqt" [83aff3e5-b08d-4b7e-8dc2-c2fd1fd9bec7] Running
	I0507 18:38:24.997012    8396 system_pods.go:89] "kube-proxy-rshfg" [2ce7075a-2b4a-4e31-80bf-7de27797a8d6] Running
	I0507 18:38:24.997066    8396 system_pods.go:89] "kube-scheduler-ha-210800" [37fbafc0-eae6-407e-8b45-9c0181aca8dc] Running
	I0507 18:38:24.997066    8396 system_pods.go:89] "kube-scheduler-ha-210800-m02" [51a4f5d3-0f41-4420-87ce-5ac44bb93e3c] Running
	I0507 18:38:24.997066    8396 system_pods.go:89] "kube-vip-ha-210800" [b1216eb2-830b-4756-97c6-a35d5e74c718] Running
	I0507 18:38:24.997107    8396 system_pods.go:89] "kube-vip-ha-210800-m02" [ff2f83aa-9bdb-4dfc-98bf-d632984ef52d] Running
	I0507 18:38:24.997107    8396 system_pods.go:89] "storage-provisioner" [f05f26ec-1ebd-4111-adc5-825fc75a414d] Running
	I0507 18:38:24.997107    8396 system_pods.go:126] duration metric: took 211.5531ms to wait for k8s-apps to be running ...
	I0507 18:38:24.997107    8396 system_svc.go:44] waiting for kubelet service to be running ....
	I0507 18:38:25.004087    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 18:38:25.027846    8396 system_svc.go:56] duration metric: took 30.7369ms WaitForService to wait for kubelet
	I0507 18:38:25.027961    8396 kubeadm.go:576] duration metric: took 18.2167808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 18:38:25.028009    8396 node_conditions.go:102] verifying NodePressure condition ...
	I0507 18:38:25.184161    8396 request.go:629] Waited for 155.8275ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes
	I0507 18:38:25.184517    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes
	I0507 18:38:25.184517    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:25.184624    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:25.184624    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:25.188899    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:25.190314    8396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 18:38:25.190314    8396 node_conditions.go:123] node cpu capacity is 2
	I0507 18:38:25.190314    8396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 18:38:25.190314    8396 node_conditions.go:123] node cpu capacity is 2
	I0507 18:38:25.190314    8396 node_conditions.go:105] duration metric: took 162.2931ms to run NodePressure ...
	I0507 18:38:25.190314    8396 start.go:240] waiting for startup goroutines ...
	I0507 18:38:25.190314    8396 start.go:254] writing updated cluster config ...
	I0507 18:38:25.194052    8396 out.go:177] 
	I0507 18:38:25.208158    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:38:25.209156    8396 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:38:25.213290    8396 out.go:177] * Starting "ha-210800-m03" control-plane node in "ha-210800" cluster
	I0507 18:38:25.220142    8396 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 18:38:25.220142    8396 cache.go:56] Caching tarball of preloaded images
	I0507 18:38:25.220142    8396 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0507 18:38:25.220142    8396 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 18:38:25.220142    8396 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:38:25.223408    8396 start.go:360] acquireMachinesLock for ha-210800-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 18:38:25.224205    8396 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-210800-m03"
	I0507 18:38:25.224386    8396 start.go:93] Provisioning new machine with config: &{Name:ha-210800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.132.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.135.87 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:38:25.224386    8396 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0507 18:38:25.226689    8396 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 18:38:25.227557    8396 start.go:159] libmachine.API.Create for "ha-210800" (driver="hyperv")
	I0507 18:38:25.227619    8396 client.go:168] LocalClient.Create starting
	I0507 18:38:25.227798    8396 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0507 18:38:25.228129    8396 main.go:141] libmachine: Decoding PEM data...
	I0507 18:38:25.228129    8396 main.go:141] libmachine: Parsing certificate...
	I0507 18:38:25.228287    8396 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0507 18:38:25.228418    8396 main.go:141] libmachine: Decoding PEM data...
	I0507 18:38:25.228418    8396 main.go:141] libmachine: Parsing certificate...
	I0507 18:38:25.228418    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0507 18:38:26.923053    8396 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0507 18:38:26.923053    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:26.924135    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0507 18:38:28.464544    8396 main.go:141] libmachine: [stdout =====>] : False
	
	I0507 18:38:28.464822    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:28.464822    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 18:38:29.824471    8396 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 18:38:29.824471    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:29.824985    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 18:38:33.103075    8396 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 18:38:33.103166    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:33.104962    8396 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0507 18:38:33.402984    8396 main.go:141] libmachine: Creating SSH key...
	I0507 18:38:33.702725    8396 main.go:141] libmachine: Creating VM...
	I0507 18:38:33.702725    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 18:38:36.303166    8396 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 18:38:36.304021    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:36.304100    8396 main.go:141] libmachine: Using switch "Default Switch"
	I0507 18:38:36.304100    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 18:38:37.899524    8396 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 18:38:37.899524    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:37.899524    8396 main.go:141] libmachine: Creating VHD
	I0507 18:38:37.899524    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0507 18:38:41.384681    8396 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : CE41D955-8D91-4A76-A8C2-269EA17A2698
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0507 18:38:41.385192    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:41.385192    8396 main.go:141] libmachine: Writing magic tar header
	I0507 18:38:41.385192    8396 main.go:141] libmachine: Writing SSH key tar header
	I0507 18:38:41.393966    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0507 18:38:44.368608    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:38:44.369328    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:44.369328    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\disk.vhd' -SizeBytes 20000MB
	I0507 18:38:46.683575    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:38:46.683575    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:46.683575    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-210800-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0507 18:38:49.959558    8396 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-210800-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0507 18:38:49.959558    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:49.959911    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-210800-m03 -DynamicMemoryEnabled $false
	I0507 18:38:51.997094    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:38:51.997444    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:51.997600    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-210800-m03 -Count 2
	I0507 18:38:53.983264    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:38:53.983472    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:53.983472    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-210800-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\boot2docker.iso'
	I0507 18:38:56.255555    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:38:56.255709    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:56.255709    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-210800-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\disk.vhd'
	I0507 18:38:58.625911    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:38:58.625911    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:58.625911    8396 main.go:141] libmachine: Starting VM...
	I0507 18:38:58.625911    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-210800-m03
	I0507 18:39:01.407243    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:39:01.408246    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:01.408301    8396 main.go:141] libmachine: Waiting for host to start...
	I0507 18:39:01.408301    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:03.435590    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:03.435590    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:03.435679    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:05.672622    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:39:05.673363    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:06.685664    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:08.653123    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:08.653159    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:08.653305    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:10.914519    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:39:10.914962    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:11.922875    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:13.925351    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:13.926246    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:13.926444    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:16.221152    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:39:16.221190    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:17.226850    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:19.225396    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:19.225396    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:19.225396    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:21.503921    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:39:21.503921    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:22.505063    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:24.502024    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:24.502824    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:24.502824    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:26.883964    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:39:26.883964    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:26.884079    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:28.850902    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:28.851221    8396 main.go:141] libmachine: [stderr =====>] : 
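	The alternating `( Hyper-V\Get-VM ... ).state` / `ipaddresses[0]` queries above are a poll-until-IP loop: minikube repeatedly asks the VM's first network adapter for an address, sleeping between attempts, until DHCP has assigned one (here 172.19.137.224 after several empty reads). A minimal sh sketch of that retry pattern — the stub below is hypothetical and stands in for the real PowerShell query, it is not minikube's actual code:

```shell
#!/bin/sh
# Poll until the "adapter" reports an IP or we give up after 10 tries.
n=0
ip=""
while [ -z "$ip" ] && [ "$n" -lt 10 ]; do
  n=$((n + 1))
  sleep 0   # a real loop would pause between polls (the log shows ~1s gaps)
  # Stub result: the adapter reports nothing for the first two polls,
  # mimicking the empty [stdout] lines in the log above.
  if [ "$n" -ge 3 ]; then ip="172.19.137.224"; fi
done
echo "IP after $n polls: $ip"
```

The same shape (query, check for empty, sleep, retry, bounded attempts) is what produces the repeated empty `[stdout =====>]` blocks in the log before the address finally appears.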
	I0507 18:39:28.851221    8396 machine.go:94] provisionDockerMachine start ...
	I0507 18:39:28.851221    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:30.772220    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:30.772220    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:30.772317    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:33.054876    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:39:33.055401    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:33.059226    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:39:33.059914    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.137.224 22 <nil> <nil>}
	I0507 18:39:33.059914    8396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 18:39:33.194916    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0507 18:39:33.194995    8396 buildroot.go:166] provisioning hostname "ha-210800-m03"
	I0507 18:39:33.194995    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:35.122163    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:35.122163    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:35.122163    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:37.407198    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:39:37.407198    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:37.412672    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:39:37.413395    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.137.224 22 <nil> <nil>}
	I0507 18:39:37.413395    8396 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-210800-m03 && echo "ha-210800-m03" | sudo tee /etc/hostname
	I0507 18:39:37.570948    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-210800-m03
	
	I0507 18:39:37.570948    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:39.474599    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:39.474661    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:39.475005    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:41.756911    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:39:41.757146    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:41.760782    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:39:41.761310    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.137.224 22 <nil> <nil>}
	I0507 18:39:41.761310    8396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-210800-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-210800-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-210800-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 18:39:41.908119    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 18:39:41.908119    8396 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0507 18:39:41.908119    8396 buildroot.go:174] setting up certificates
	I0507 18:39:41.908119    8396 provision.go:84] configureAuth start
	I0507 18:39:41.908772    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:43.828143    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:43.829277    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:43.829277    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:46.130130    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:39:46.130212    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:46.130212    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:48.046300    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:48.046300    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:48.046394    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:50.364882    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:39:50.364882    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:50.364882    8396 provision.go:143] copyHostCerts
	I0507 18:39:50.365540    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0507 18:39:50.365749    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0507 18:39:50.365749    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0507 18:39:50.365749    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0507 18:39:50.366808    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0507 18:39:50.366808    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0507 18:39:50.366808    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0507 18:39:50.366808    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0507 18:39:50.368132    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0507 18:39:50.368132    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0507 18:39:50.368132    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0507 18:39:50.368132    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0507 18:39:50.369282    8396 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-210800-m03 san=[127.0.0.1 172.19.137.224 ha-210800-m03 localhost minikube]
	I0507 18:39:50.528513    8396 provision.go:177] copyRemoteCerts
	I0507 18:39:50.541304    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 18:39:50.541304    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:52.470874    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:52.470874    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:52.470874    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:54.751170    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:39:54.751844    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:54.752236    8396 sshutil.go:53] new ssh client: &{IP:172.19.137.224 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\id_rsa Username:docker}
	I0507 18:39:54.856978    8396 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3152939s)
	I0507 18:39:54.856978    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0507 18:39:54.857455    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0507 18:39:54.899673    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0507 18:39:54.899947    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0507 18:39:54.941904    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0507 18:39:54.942130    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0507 18:39:54.987116    8396 provision.go:87] duration metric: took 13.0781064s to configureAuth
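	The configureAuth step that just completed (provision.go:117) generated a server certificate whose SANs cover the loopback address, the VM's IP, and its hostnames, then copied it to /etc/docker on the guest. minikube does this in Go, signing with its own CA (ca.pem/ca-key.pem); purely for illustration, a self-signed stand-in with the same SAN list can be produced with openssl (requires openssl 1.1.1+ for -addext):

```shell
# Self-signed approximation of the server cert from the log; the real one is
# signed by minikube's CA. SANs mirror san=[127.0.0.1 172.19.137.224 ...].
openssl req -x509 -newkey rsa:2048 -nodes -days 3 \
  -keyout server-key.pem -out server.pem \
  -subj "/O=jenkins.ha-210800-m03" \
  -addext "subjectAltName=IP:127.0.0.1,IP:172.19.137.224,DNS:ha-210800-m03,DNS:localhost,DNS:minikube"
# Inspect the SANs that ended up in the certificate
openssl x509 -in server.pem -noout -ext subjectAltName
```

Dockerd is then started with `--tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem`, as the unit file later in this log shows, so the SAN list must include every name or address a client might dial.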
	I0507 18:39:54.987116    8396 buildroot.go:189] setting minikube options for container-runtime
	I0507 18:39:54.987578    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:39:54.987650    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:56.886507    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:56.887138    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:56.887138    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:59.197564    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:39:59.197651    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:59.204637    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:39:59.204637    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.137.224 22 <nil> <nil>}
	I0507 18:39:59.204637    8396 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 18:39:59.331642    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 18:39:59.331642    8396 buildroot.go:70] root file system type: tmpfs
	I0507 18:39:59.331642    8396 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 18:39:59.331642    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:01.265541    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:01.265541    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:01.265541    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:03.666802    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:03.666880    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:03.673288    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:40:03.673845    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.137.224 22 <nil> <nil>}
	I0507 18:40:03.673845    8396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.132.69"
	Environment="NO_PROXY=172.19.132.69,172.19.135.87"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 18:40:03.830444    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.132.69
	Environment=NO_PROXY=172.19.132.69,172.19.135.87
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 18:40:03.830444    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:05.804748    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:05.805302    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:05.805379    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:08.158769    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:08.158769    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:08.163009    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:40:08.163621    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.137.224 22 <nil> <nil>}
	I0507 18:40:08.163621    8396 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 18:40:10.299214    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0507 18:40:10.299214    8396 machine.go:97] duration metric: took 41.4451728s to provisionDockerMachine
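The unit install above is a write-to-`.new`-then-swap pattern: the freshly rendered docker.service is only moved into place (followed by `daemon-reload` / `enable` / `restart`) when `diff` reports it missing or changed, which is exactly why the first run logs `can't stat '/lib/systemd/system/docker.service'`. A minimal sketch of the same pattern against throwaway paths (the real run targets `/lib/systemd/system/docker.service` and restarts docker; here the restart is just an echo):

```shell
# Stand-in for /lib/systemd/system/docker.service, so no sudo is needed.
workdir="$(mktemp -d)"
unit="${workdir}/docker.service"
printf '[Unit]\nDescription=demo unit\n' > "${unit}.new"

# Install only when the installed unit is missing or differs,
# mirroring: diff -u old new || { mv new old; daemon-reload; restart; }
if ! diff -u "${unit}" "${unit}.new" >/dev/null 2>&1; then
  mv "${unit}.new" "${unit}"
  echo "unit changed: would run daemon-reload && systemctl restart docker"
else
  echo "unit unchanged: nothing to do"
fi
```

On a second run with an identical `.new` file the `diff` succeeds and the restart branch is skipped, which keeps re-provisioning idempotent.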
	I0507 18:40:10.299214    8396 client.go:171] duration metric: took 1m45.0644265s to LocalClient.Create
	I0507 18:40:10.299214    8396 start.go:167] duration metric: took 1m45.0653573s to libmachine.API.Create "ha-210800"
	I0507 18:40:10.299214    8396 start.go:293] postStartSetup for "ha-210800-m03" (driver="hyperv")
	I0507 18:40:10.299214    8396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 18:40:10.307679    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 18:40:10.307679    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:12.284014    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:12.284699    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:12.284699    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:14.599789    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:14.599789    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:14.600332    8396 sshutil.go:53] new ssh client: &{IP:172.19.137.224 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\id_rsa Username:docker}
	I0507 18:40:14.711988    8396 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4039314s)
	I0507 18:40:14.724592    8396 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 18:40:14.733406    8396 info.go:137] Remote host: Buildroot 2023.02.9
	I0507 18:40:14.733406    8396 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0507 18:40:14.733997    8396 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0507 18:40:14.734112    8396 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> 99922.pem in /etc/ssl/certs
	I0507 18:40:14.734112    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /etc/ssl/certs/99922.pem
	I0507 18:40:14.743038    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0507 18:40:14.760893    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /etc/ssl/certs/99922.pem (1708 bytes)
	I0507 18:40:14.814797    8396 start.go:296] duration metric: took 4.5152764s for postStartSetup
	I0507 18:40:14.818517    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:16.734414    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:16.734414    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:16.735033    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:19.083500    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:19.083579    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:19.083579    8396 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:40:19.085654    8396 start.go:128] duration metric: took 1m53.8535035s to createHost
	I0507 18:40:19.085729    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:21.019148    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:21.019148    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:21.019148    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:23.359162    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:23.359162    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:23.362701    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:40:23.363299    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.137.224 22 <nil> <nil>}
	I0507 18:40:23.363299    8396 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0507 18:40:23.499768    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715107223.738113002
	
	I0507 18:40:23.499768    8396 fix.go:216] guest clock: 1715107223.738113002
	I0507 18:40:23.499768    8396 fix.go:229] Guest: 2024-05-07 18:40:23.738113002 +0000 UTC Remote: 2024-05-07 18:40:19.0856542 +0000 UTC m=+518.903648801 (delta=4.652458802s)
	I0507 18:40:23.499768    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:25.439326    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:25.440046    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:25.440046    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:27.774327    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:27.774327    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:27.779654    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:40:27.780034    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.137.224 22 <nil> <nil>}
	I0507 18:40:27.780034    8396 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715107223
	I0507 18:40:27.915035    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May  7 18:40:23 UTC 2024
	
	I0507 18:40:27.915035    8396 fix.go:236] clock set: Tue May  7 18:40:23 UTC 2024
	 (err=<nil>)
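The clock fix above reads the guest's epoch time over SSH, computes the drift against the host-recorded time (`delta=4.652458802s` in this run), and resets the guest with `sudo date -s @<epoch>`. A sketch of just the drift check, with both timestamps hard-coded from this log and an assumed drift threshold:

```shell
# Hard-coded from the log: guest clock vs. host-recorded time (seconds).
guest_epoch=1715107223
host_epoch=1715107219

# Absolute drift in whole seconds.
delta=$(( guest_epoch - host_epoch ))
[ "${delta}" -lt 0 ] && delta=$(( -delta ))

# Threshold is an assumption for the sketch, not minikube's actual cutoff.
if [ "${delta}" -gt 2 ]; then
  echo "clock drift ${delta}s: would run 'sudo date -s @${guest_epoch}' on the guest"
fi
```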
	I0507 18:40:27.915035    8396 start.go:83] releasing machines lock for "ha-210800-m03", held for 2m2.6824314s
	I0507 18:40:27.915573    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:29.815560    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:29.815560    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:29.816069    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:32.120627    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:32.120627    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:32.124530    8396 out.go:177] * Found network options:
	I0507 18:40:32.143758    8396 out.go:177]   - NO_PROXY=172.19.132.69,172.19.135.87
	W0507 18:40:32.147240    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	W0507 18:40:32.147240    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	I0507 18:40:32.153318    8396 out.go:177]   - NO_PROXY=172.19.132.69,172.19.135.87
	W0507 18:40:32.155453    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	W0507 18:40:32.155453    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	W0507 18:40:32.156193    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	W0507 18:40:32.156193    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	I0507 18:40:32.158869    8396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 18:40:32.158976    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:32.166378    8396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0507 18:40:32.166378    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:34.167883    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:34.167883    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:34.167883    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:34.168230    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:34.168230    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:34.168230    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:36.594910    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:36.594910    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:36.595798    8396 sshutil.go:53] new ssh client: &{IP:172.19.137.224 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\id_rsa Username:docker}
	I0507 18:40:36.620180    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:36.620180    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:36.620878    8396 sshutil.go:53] new ssh client: &{IP:172.19.137.224 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\id_rsa Username:docker}
	I0507 18:40:36.689138    8396 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5224539s)
	W0507 18:40:36.689138    8396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 18:40:36.697177    8396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0507 18:40:36.760083    8396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
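The `find ... -exec` above disables conflicting CNI configs by renaming them with a `.mk_disabled` suffix instead of deleting them, which is why the log reports `disabled [/etc/cni/net.d/87-podman-bridge.conflist]`. The same rename against a throwaway `net.d` directory (`%!p(MISSING)` in the log is Go's fmt reporting a `%p` verb with no argument; the real command prints each moved path):

```shell
# Stand-in for /etc/cni/net.d, so the sketch needs no sudo.
netd="$(mktemp -d)"
touch "${netd}/87-podman-bridge.conflist" "${netd}/200-loopback.conf"

# Rename bridge/podman configs out of the way, skipping already-disabled ones.
find "${netd}" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;

ls "${netd}"
```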
	I0507 18:40:36.760083    8396 start.go:494] detecting cgroup driver to use...
	I0507 18:40:36.760083    8396 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6009022s)
	I0507 18:40:36.760083    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 18:40:36.812555    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0507 18:40:36.840473    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 18:40:36.861051    8396 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 18:40:36.869048    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 18:40:36.896992    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 18:40:36.928440    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 18:40:36.958147    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 18:40:36.985538    8396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 18:40:37.012025    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 18:40:37.038205    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 18:40:37.064841    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0507 18:40:37.091488    8396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 18:40:37.120567    8396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 18:40:37.147354    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:40:37.324397    8396 ssh_runner.go:195] Run: sudo systemctl restart containerd
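The run of `sed` commands above rewrites `/etc/containerd/config.toml` in place before restarting containerd: pin the sandbox image to `pause:3.9`, force `SystemdCgroup = false` (the "cgroupfs" driver the log mentions), and migrate legacy runtime names to `io.containerd.runc.v2`. The same three edits applied to a throwaway copy of a minimal config (the TOML content here is an illustrative stand-in, not the guest's real file):

```shell
# Throwaway stand-in for /etc/containerd/config.toml.
cfg="$(mktemp)"
cat > "${cfg}" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runtime.v1.linux"
    SystemdCgroup = true
EOF

# Mirror the provisioner's in-place edits (GNU sed -i -r).
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "${cfg}"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "${cfg}"
sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' "${cfg}"

cat "${cfg}"
```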
	I0507 18:40:37.354501    8396 start.go:494] detecting cgroup driver to use...
	I0507 18:40:37.364990    8396 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 18:40:37.394511    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 18:40:37.429110    8396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 18:40:37.466721    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 18:40:37.497293    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 18:40:37.528447    8396 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0507 18:40:37.586411    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 18:40:37.608157    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 18:40:37.652932    8396 ssh_runner.go:195] Run: which cri-dockerd
	I0507 18:40:37.668377    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 18:40:37.684371    8396 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 18:40:37.720817    8396 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 18:40:37.900446    8396 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 18:40:38.072754    8396 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 18:40:38.073201    8396 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0507 18:40:38.117691    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:40:38.291018    8396 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 18:40:40.766433    8396 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4752473s)
	I0507 18:40:40.775189    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 18:40:40.806689    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 18:40:40.838252    8396 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 18:40:41.032561    8396 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 18:40:41.211867    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:40:41.394761    8396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 18:40:41.433405    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 18:40:41.465120    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:40:41.651300    8396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 18:40:41.753835    8396 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 18:40:41.762737    8396 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 18:40:41.775852    8396 start.go:562] Will wait 60s for crictl version
	I0507 18:40:41.787692    8396 ssh_runner.go:195] Run: which crictl
	I0507 18:40:41.800241    8396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 18:40:41.859953    8396 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0507 18:40:41.866286    8396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 18:40:41.903013    8396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 18:40:41.936560    8396 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0507 18:40:41.939758    8396 out.go:177]   - env NO_PROXY=172.19.132.69
	I0507 18:40:41.943054    8396 out.go:177]   - env NO_PROXY=172.19.132.69,172.19.135.87
	I0507 18:40:41.944759    8396 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0507 18:40:41.949518    8396 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0507 18:40:41.949574    8396 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0507 18:40:41.949574    8396 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0507 18:40:41.949574    8396 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a3:a5:4f Flags:up|broadcast|multicast|running}
	I0507 18:40:41.952293    8396 ip.go:210] interface addr: fe80::1edb:f5fd:c218:d8d2/64
	I0507 18:40:41.952353    8396 ip.go:210] interface addr: 172.19.128.1/20
	I0507 18:40:41.961209    8396 ssh_runner.go:195] Run: grep 172.19.128.1	host.minikube.internal$ /etc/hosts
	I0507 18:40:41.966378    8396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
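The hosts-file update above first greps for an existing `host.minikube.internal` line and, when it needs updating, rewrites `/etc/hosts` by filtering out any old entry and appending the current gateway IP, then copying the result back with sudo. The same filter-and-append against a temp file (the stale `10.0.0.1` entry is an invented example):

```shell
# Stand-in for /etc/hosts, seeded with a stale entry to replace.
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n10.0.0.1\thost.minikube.internal\n' > "${hosts}"

# Drop any existing host.minikube.internal line, append the new mapping,
# then swap the rewritten file into place.
{ grep -v $'\thost.minikube.internal$' "${hosts}"; \
  printf '172.19.128.1\thost.minikube.internal\n'; } > "${hosts}.new"
mv "${hosts}.new" "${hosts}"
cat "${hosts}"
```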
	I0507 18:40:41.989041    8396 mustload.go:65] Loading cluster: ha-210800
	I0507 18:40:41.989647    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:40:41.990345    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:40:43.927401    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:43.927401    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:43.927401    8396 host.go:66] Checking if "ha-210800" exists ...
	I0507 18:40:43.927924    8396 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800 for IP: 172.19.137.224
	I0507 18:40:43.927994    8396 certs.go:194] generating shared ca certs ...
	I0507 18:40:43.927994    8396 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:40:43.928517    8396 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0507 18:40:43.928594    8396 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0507 18:40:43.928594    8396 certs.go:256] generating profile certs ...
	I0507 18:40:43.929440    8396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.key
	I0507 18:40:43.929440    8396 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.b99e8106
	I0507 18:40:43.929440    8396 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.b99e8106 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.132.69 172.19.135.87 172.19.137.224 172.19.143.254]
	I0507 18:40:44.148518    8396 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.b99e8106 ...
	I0507 18:40:44.148518    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.b99e8106: {Name:mk7a5e439aeccc02df3bdc8f3a9d3b314f05045d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:40:44.148956    8396 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.b99e8106 ...
	I0507 18:40:44.148956    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.b99e8106: {Name:mk29a150d7d42cd36c6eb069713d060ebd6bf280 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:40:44.149877    8396 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.b99e8106 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt
	I0507 18:40:44.163176    8396 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.b99e8106 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key
	I0507 18:40:44.164446    8396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key
	I0507 18:40:44.164446    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0507 18:40:44.164645    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0507 18:40:44.164744    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0507 18:40:44.164849    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0507 18:40:44.164934    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0507 18:40:44.165118    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0507 18:40:44.165475    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0507 18:40:44.165730    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0507 18:40:44.166254    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem (1338 bytes)
	W0507 18:40:44.166570    8396 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992_empty.pem, impossibly tiny 0 bytes
	I0507 18:40:44.166761    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0507 18:40:44.166967    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0507 18:40:44.167269    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0507 18:40:44.167429    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0507 18:40:44.167965    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem (1708 bytes)
	I0507 18:40:44.168273    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /usr/share/ca-certificates/99922.pem
	I0507 18:40:44.168374    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:40:44.168598    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem -> /usr/share/ca-certificates/9992.pem
	I0507 18:40:44.168837    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:40:46.106284    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:46.106512    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:46.106512    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:48.393140    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:40:48.393775    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:48.394056    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:40:48.499894    8396 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0507 18:40:48.507123    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0507 18:40:48.534235    8396 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0507 18:40:48.541633    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0507 18:40:48.570274    8396 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0507 18:40:48.576675    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0507 18:40:48.603825    8396 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0507 18:40:48.610435    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0507 18:40:48.637866    8396 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0507 18:40:48.645537    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0507 18:40:48.676084    8396 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0507 18:40:48.682781    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0507 18:40:48.701521    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 18:40:48.751311    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 18:40:48.797997    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 18:40:48.843024    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0507 18:40:48.886804    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0507 18:40:48.932828    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0507 18:40:48.989081    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0507 18:40:49.033298    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0507 18:40:49.077394    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /usr/share/ca-certificates/99922.pem (1708 bytes)
	I0507 18:40:49.126276    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 18:40:49.168189    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem --> /usr/share/ca-certificates/9992.pem (1338 bytes)
	I0507 18:40:49.213827    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0507 18:40:49.246065    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0507 18:40:49.274555    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0507 18:40:49.302545    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0507 18:40:49.332108    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0507 18:40:49.363802    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0507 18:40:49.393835    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0507 18:40:49.437185    8396 ssh_runner.go:195] Run: openssl version
	I0507 18:40:49.453938    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9992.pem && ln -fs /usr/share/ca-certificates/9992.pem /etc/ssl/certs/9992.pem"
	I0507 18:40:49.480996    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9992.pem
	I0507 18:40:49.487292    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 18:40:49.497188    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9992.pem
	I0507 18:40:49.512950    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9992.pem /etc/ssl/certs/51391683.0"
	I0507 18:40:49.542063    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99922.pem && ln -fs /usr/share/ca-certificates/99922.pem /etc/ssl/certs/99922.pem"
	I0507 18:40:49.568319    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99922.pem
	I0507 18:40:49.575176    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 18:40:49.582056    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99922.pem
	I0507 18:40:49.599563    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99922.pem /etc/ssl/certs/3ec20f2e.0"
	I0507 18:40:49.627992    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 18:40:49.654907    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:40:49.661863    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:40:49.671999    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:40:49.688416    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0507 18:40:49.721777    8396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 18:40:49.729065    8396 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0507 18:40:49.729065    8396 kubeadm.go:928] updating node {m03 172.19.137.224 8443 v1.30.0 docker true true} ...
	I0507 18:40:49.729065    8396 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-210800-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.137.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0507 18:40:49.729065    8396 kube-vip.go:111] generating kube-vip config ...
	I0507 18:40:49.736876    8396 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0507 18:40:49.763268    8396 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0507 18:40:49.763361    8396 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.143.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0507 18:40:49.773500    8396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0507 18:40:49.788235    8396 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0507 18:40:49.796814    8396 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0507 18:40:49.817778    8396 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0507 18:40:49.817778    8396 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0507 18:40:49.817778    8396 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0507 18:40:49.817778    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0507 18:40:49.817778    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0507 18:40:49.829549    8396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0507 18:40:49.829549    8396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0507 18:40:49.829549    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 18:40:49.838655    8396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0507 18:40:49.838655    8396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0507 18:40:49.838655    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0507 18:40:49.838655    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0507 18:40:49.885430    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0507 18:40:49.894800    8396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0507 18:40:50.015305    8396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0507 18:40:50.015420    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0507 18:40:51.132208    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0507 18:40:51.150273    8396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0507 18:40:51.185092    8396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 18:40:51.221630    8396 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0507 18:40:51.263531    8396 ssh_runner.go:195] Run: grep 172.19.143.254	control-plane.minikube.internal$ /etc/hosts
	I0507 18:40:51.269933    8396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 18:40:51.301623    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:40:51.499076    8396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 18:40:51.528080    8396 host.go:66] Checking if "ha-210800" exists ...
	I0507 18:40:51.528504    8396 start.go:316] joinCluster: &{Name:ha-210800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.132.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.135.87 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.19.137.224 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 18:40:51.528504    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0507 18:40:51.528504    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:40:53.434168    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:53.434168    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:53.434554    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:55.770459    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:40:55.770698    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:55.770919    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:40:55.971634    8396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.4428301s)
	I0507 18:40:55.971634    8396 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.19.137.224 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:40:55.971758    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token flxcjm.6wq1lewpqlhhlihd --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-210800-m03 --control-plane --apiserver-advertise-address=172.19.137.224 --apiserver-bind-port=8443"
	I0507 18:41:38.061990    8396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token flxcjm.6wq1lewpqlhhlihd --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-210800-m03 --control-plane --apiserver-advertise-address=172.19.137.224 --apiserver-bind-port=8443": (42.0873459s)
	I0507 18:41:38.062089    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0507 18:41:38.832012    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-210800-m03 minikube.k8s.io/updated_at=2024_05_07T18_41_38_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f minikube.k8s.io/name=ha-210800 minikube.k8s.io/primary=false
	I0507 18:41:38.989146    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-210800-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0507 18:41:39.135248    8396 start.go:318] duration metric: took 47.603536s to joinCluster
	I0507 18:41:39.135411    8396 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.19.137.224 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:41:39.138156    8396 out.go:177] * Verifying Kubernetes components...
	I0507 18:41:39.135964    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:41:39.152937    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:41:39.554495    8396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 18:41:39.589409    8396 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:41:39.589993    8396 kapi.go:59] client config for ha-210800: &rest.Config{Host:"https://172.19.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-210800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-210800\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0507 18:41:39.590112    8396 kubeadm.go:477] Overriding stale ClientConfig host https://172.19.143.254:8443 with https://172.19.132.69:8443
	I0507 18:41:39.590999    8396 node_ready.go:35] waiting up to 6m0s for node "ha-210800-m03" to be "Ready" ...
	I0507 18:41:39.591114    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:39.591114    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:39.591114    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:39.591114    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:39.604518    8396 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0507 18:41:40.096282    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:40.096282    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:40.096282    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:40.096282    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:40.100870    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:40.603147    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:40.603147    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:40.603378    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:40.603378    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:40.607983    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:41.092775    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:41.092775    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:41.092775    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:41.092775    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:41.096966    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:41.600332    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:41.600332    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:41.600332    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:41.600332    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:41.926619    8396 round_trippers.go:574] Response Status: 200 OK in 326 milliseconds
	I0507 18:41:41.927477    8396 node_ready.go:53] node "ha-210800-m03" has status "Ready":"False"
	I0507 18:41:42.105509    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:42.105509    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:42.105509    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:42.105509    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:42.109109    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:42.599004    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:42.599212    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:42.599212    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:42.599212    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:42.603574    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:43.100149    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:43.100182    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:43.100182    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:43.100239    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:43.104498    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:43.605678    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:43.605678    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:43.605678    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:43.605678    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:43.608852    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:44.107775    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:44.107775    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:44.107775    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:44.107866    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:44.112264    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:44.113716    8396 node_ready.go:53] node "ha-210800-m03" has status "Ready":"False"
	I0507 18:41:44.593419    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:44.593506    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:44.593506    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:44.593506    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:44.598049    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:45.093613    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:45.093613    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:45.093613    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:45.093613    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:45.100447    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:41:45.595856    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:45.595856    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:45.595856    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:45.595856    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:45.601132    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:46.094597    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:46.094597    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:46.094851    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:46.094851    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:46.102663    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:41:46.594334    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:46.594414    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:46.594414    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:46.594488    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:46.599096    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:46.601260    8396 node_ready.go:53] node "ha-210800-m03" has status "Ready":"False"
	I0507 18:41:47.105267    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:47.105267    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.105267    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.105267    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.112610    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:41:47.117448    8396 node_ready.go:49] node "ha-210800-m03" has status "Ready":"True"
	I0507 18:41:47.117448    8396 node_ready.go:38] duration metric: took 7.5259064s for node "ha-210800-m03" to be "Ready" ...
	I0507 18:41:47.117448    8396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 18:41:47.117559    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:41:47.117559    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.117559    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.117559    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.130014    8396 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0507 18:41:47.139989    8396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.139989    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cr9nn
	I0507 18:41:47.139989    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.139989    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.139989    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.143631    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:47.144627    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:47.144627    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.144627    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.144627    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.147682    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:47.149118    8396 pod_ready.go:92] pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:47.149181    8396 pod_ready.go:81] duration metric: took 9.1916ms for pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.149181    8396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.149273    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dxsqf
	I0507 18:41:47.149273    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.149273    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.149273    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.152673    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:47.154372    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:47.154446    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.154446    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.154446    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.156687    8396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:41:47.157709    8396 pod_ready.go:92] pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:47.157709    8396 pod_ready.go:81] duration metric: took 8.5277ms for pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.157709    8396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.157709    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800
	I0507 18:41:47.157709    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.157709    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.157709    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.160924    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:47.161992    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:47.162080    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.162080    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.162080    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.165155    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:47.166216    8396 pod_ready.go:92] pod "etcd-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:47.166216    8396 pod_ready.go:81] duration metric: took 8.5062ms for pod "etcd-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.166216    8396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.166364    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:41:47.166390    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.166423    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.166423    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.169597    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:47.170546    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:47.170546    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.170546    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.170546    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.173116    8396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:41:47.175021    8396 pod_ready.go:92] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:47.175021    8396 pod_ready.go:81] duration metric: took 8.8048ms for pod "etcd-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.175021    8396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-210800-m03" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.311361    8396 request.go:629] Waited for 136.0223ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:47.311493    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:47.311493    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.311493    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.311493    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.316142    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:47.514704    8396 request.go:629] Waited for 197.4328ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:47.514763    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:47.514763    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.514763    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.514763    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.533529    8396 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0507 18:41:47.705769    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:47.705769    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.705769    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.705769    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.712867    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:41:47.910114    8396 request.go:629] Waited for 196.522ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:47.910486    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:47.910486    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.910486    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.910486    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.914654    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:48.176864    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:48.176946    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:48.176946    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:48.176946    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:48.180277    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:48.317137    8396 request.go:629] Waited for 134.6006ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:48.317137    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:48.317137    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:48.317137    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:48.317137    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:48.322243    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:41:48.677103    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:48.677163    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:48.677163    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:48.677163    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:48.682016    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:48.707532    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:48.707532    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:48.707532    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:48.707532    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:48.712121    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:49.189713    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:49.189713    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:49.190176    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:49.190248    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:49.197691    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:41:49.199069    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:49.199069    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:49.199069    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:49.199069    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:49.202665    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:49.203682    8396 pod_ready.go:102] pod "etcd-ha-210800-m03" in "kube-system" namespace has status "Ready":"False"
	I0507 18:41:49.686023    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:49.686023    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:49.686023    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:49.686023    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:49.693907    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:41:49.695045    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:49.695045    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:49.695045    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:49.695045    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:49.698638    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:50.186057    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:50.186057    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:50.186057    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:50.186057    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:50.190880    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:50.192343    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:50.192452    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:50.192452    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:50.192452    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:50.195728    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:50.687659    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:50.687732    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:50.687732    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:50.687732    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:50.691987    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:50.693441    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:50.693503    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:50.693503    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:50.693503    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:50.697288    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:50.698308    8396 pod_ready.go:92] pod "etcd-ha-210800-m03" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:50.698308    8396 pod_ready.go:81] duration metric: took 3.5230501s for pod "etcd-ha-210800-m03" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:50.698308    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:50.698411    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-210800
	I0507 18:41:50.698474    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:50.698474    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:50.698474    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:50.701617    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:50.718370    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:50.718370    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:50.718370    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:50.718370    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:50.721617    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:50.723011    8396 pod_ready.go:92] pod "kube-apiserver-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:50.723011    8396 pod_ready.go:81] duration metric: took 24.7015ms for pod "kube-apiserver-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:50.723011    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:50.908454    8396 request.go:629] Waited for 185.4305ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-210800-m02
	I0507 18:41:50.908681    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-210800-m02
	I0507 18:41:50.908780    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:50.908780    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:50.908780    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:50.912536    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:51.113412    8396 request.go:629] Waited for 199.3029ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:51.113412    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:51.113412    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:51.113412    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:51.113775    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:51.123760    8396 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0507 18:41:51.124421    8396 pod_ready.go:92] pod "kube-apiserver-ha-210800-m02" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:51.124421    8396 pod_ready.go:81] duration metric: took 401.3827ms for pod "kube-apiserver-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:51.124491    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-210800-m03" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:51.316105    8396 request.go:629] Waited for 191.6013ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-210800-m03
	I0507 18:41:51.316432    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-210800-m03
	I0507 18:41:51.316824    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:51.316824    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:51.316824    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:51.322053    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:41:51.517655    8396 request.go:629] Waited for 194.4659ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:51.518066    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:51.518066    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:51.518066    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:51.518066    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:51.522328    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:51.523456    8396 pod_ready.go:92] pod "kube-apiserver-ha-210800-m03" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:51.523557    8396 pod_ready.go:81] duration metric: took 399.0393ms for pod "kube-apiserver-ha-210800-m03" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:51.523557    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:51.707969    8396 request.go:629] Waited for 183.9757ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-210800
	I0507 18:41:51.708082    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-210800
	I0507 18:41:51.708082    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:51.708082    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:51.708184    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:51.714738    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:41:51.910804    8396 request.go:629] Waited for 194.7891ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:51.911125    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:51.911125    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:51.911125    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:51.911125    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:51.917487    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:41:51.922939    8396 pod_ready.go:92] pod "kube-controller-manager-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:51.922939    8396 pod_ready.go:81] duration metric: took 399.3552ms for pod "kube-controller-manager-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:51.922939    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:52.114748    8396 request.go:629] Waited for 191.7963ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-210800-m02
	I0507 18:41:52.115066    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-210800-m02
	I0507 18:41:52.115066    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:52.115066    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:52.115133    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:52.119071    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:52.316381    8396 request.go:629] Waited for 195.6783ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:52.316629    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:52.316629    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:52.316629    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:52.316728    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:52.323042    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:41:52.323988    8396 pod_ready.go:92] pod "kube-controller-manager-ha-210800-m02" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:52.323988    8396 pod_ready.go:81] duration metric: took 401.0225ms for pod "kube-controller-manager-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:52.323988    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-210800-m03" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:52.520424    8396 request.go:629] Waited for 196.4226ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-210800-m03
	I0507 18:41:52.520424    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-210800-m03
	I0507 18:41:52.520424    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:52.520424    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:52.520424    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:52.523692    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:52.710160    8396 request.go:629] Waited for 185.0954ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:52.710160    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:52.710160    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:52.710408    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:52.710408    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:52.718264    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:41:52.719196    8396 pod_ready.go:92] pod "kube-controller-manager-ha-210800-m03" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:52.719196    8396 pod_ready.go:81] duration metric: took 395.1817ms for pod "kube-controller-manager-ha-210800-m03" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:52.719196    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6qdqt" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:52.912493    8396 request.go:629] Waited for 193.0358ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6qdqt
	I0507 18:41:52.912626    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6qdqt
	I0507 18:41:52.912708    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:52.912770    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:52.912770    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:52.915821    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:53.118300    8396 request.go:629] Waited for 199.2201ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:53.118648    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:53.118648    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:53.118648    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:53.118835    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:53.123047    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:53.123047    8396 pod_ready.go:92] pod "kube-proxy-6qdqt" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:53.123047    8396 pod_ready.go:81] duration metric: took 403.8238ms for pod "kube-proxy-6qdqt" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:53.123047    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rshfg" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:53.306187    8396 request.go:629] Waited for 182.1207ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rshfg
	I0507 18:41:53.306381    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rshfg
	I0507 18:41:53.306381    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:53.306381    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:53.306442    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:53.310606    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:53.507869    8396 request.go:629] Waited for 196.1929ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:53.507869    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:53.507869    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:53.507869    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:53.507869    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:53.512291    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:53.513491    8396 pod_ready.go:92] pod "kube-proxy-rshfg" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:53.513491    8396 pod_ready.go:81] duration metric: took 390.4174ms for pod "kube-proxy-rshfg" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:53.513622    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tnxck" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:53.712924    8396 request.go:629] Waited for 199.148ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tnxck
	I0507 18:41:53.712924    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tnxck
	I0507 18:41:53.713138    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:53.713138    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:53.713138    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:53.722160    8396 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 18:41:53.916470    8396 request.go:629] Waited for 193.4441ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:53.916470    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:53.916601    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:53.916601    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:53.916769    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:53.930262    8396 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0507 18:41:53.931311    8396 pod_ready.go:92] pod "kube-proxy-tnxck" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:53.931311    8396 pod_ready.go:81] duration metric: took 417.6616ms for pod "kube-proxy-tnxck" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:53.931311    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:54.117498    8396 request.go:629] Waited for 186.0931ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800
	I0507 18:41:54.117498    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800
	I0507 18:41:54.117498    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:54.117498    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:54.117498    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:54.121150    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:54.320080    8396 request.go:629] Waited for 197.7954ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:54.320649    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:54.320649    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:54.320649    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:54.320649    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:54.324235    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:54.326147    8396 pod_ready.go:92] pod "kube-scheduler-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:54.326256    8396 pod_ready.go:81] duration metric: took 394.918ms for pod "kube-scheduler-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:54.326256    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:54.507519    8396 request.go:629] Waited for 181.0734ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800-m02
	I0507 18:41:54.507519    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800-m02
	I0507 18:41:54.507736    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:54.507736    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:54.507736    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:54.511796    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:54.709113    8396 request.go:629] Waited for 196.0884ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:54.709315    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:54.709315    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:54.709315    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:54.709315    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:54.713965    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:54.715132    8396 pod_ready.go:92] pod "kube-scheduler-ha-210800-m02" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:54.715193    8396 pod_ready.go:81] duration metric: took 388.8499ms for pod "kube-scheduler-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:54.715193    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-210800-m03" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:54.913415    8396 request.go:629] Waited for 198.1384ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800-m03
	I0507 18:41:54.913912    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800-m03
	I0507 18:41:54.913997    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:54.913997    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:54.913997    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:54.918031    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:55.117008    8396 request.go:629] Waited for 197.3951ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:55.117212    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:55.117212    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:55.117212    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:55.117212    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:55.121870    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:55.122810    8396 pod_ready.go:92] pod "kube-scheduler-ha-210800-m03" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:55.122810    8396 pod_ready.go:81] duration metric: took 407.5901ms for pod "kube-scheduler-ha-210800-m03" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:55.122810    8396 pod_ready.go:38] duration metric: took 8.0047384s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 18:41:55.122810    8396 api_server.go:52] waiting for apiserver process to appear ...
	I0507 18:41:55.132388    8396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 18:41:55.156016    8396 api_server.go:72] duration metric: took 16.019453s to wait for apiserver process to appear ...
	I0507 18:41:55.156016    8396 api_server.go:88] waiting for apiserver healthz status ...
	I0507 18:41:55.156016    8396 api_server.go:253] Checking apiserver healthz at https://172.19.132.69:8443/healthz ...
	I0507 18:41:55.164759    8396 api_server.go:279] https://172.19.132.69:8443/healthz returned 200:
	ok
	I0507 18:41:55.165066    8396 round_trippers.go:463] GET https://172.19.132.69:8443/version
	I0507 18:41:55.165066    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:55.165066    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:55.165066    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:55.165771    8396 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0507 18:41:55.166814    8396 api_server.go:141] control plane version: v1.30.0
	I0507 18:41:55.166814    8396 api_server.go:131] duration metric: took 10.7972ms to wait for apiserver health ...
	I0507 18:41:55.166814    8396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0507 18:41:55.319395    8396 request.go:629] Waited for 152.4587ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:41:55.319588    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:41:55.319588    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:55.322301    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:55.322301    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:55.329097    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:41:55.338462    8396 system_pods.go:59] 24 kube-system pods found
	I0507 18:41:55.338579    8396 system_pods.go:61] "coredns-7db6d8ff4d-cr9nn" [24c45106-2ef4-4932-ae5d-549fb0177b13] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "coredns-7db6d8ff4d-dxsqf" [d32c637e-c641-4ef7-b2ed-b6449fe7d50f] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "etcd-ha-210800" [6888d4a2-b10e-4329-b3de-90fc4bb053f3] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "etcd-ha-210800-m02" [97f10401-7c02-421d-abe4-2b9f37dd3f39] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "etcd-ha-210800-m03" [5f8c792a-5610-476c-b0b2-3016b3b63926] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kindnet-57g8k" [6067a407-ee57-44ab-9591-9217deded72a] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kindnet-6xzk7" [313799a0-9188-4c07-817c-e46c98c84eb6] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kindnet-whrqx" [ded04b26-3100-453a-9c0f-0a7cced93180] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-apiserver-ha-210800" [74b614eb-d1ef-4707-b1a9-faeb68a9abf4] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-apiserver-ha-210800-m02" [3399e7eb-50f0-49a6-9dbe-1d5964e62a63] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-apiserver-ha-210800-m03" [e3215a44-5844-4caa-abb7-8acd94b221ad] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-controller-manager-ha-210800" [9d31f6b7-c758-4599-9087-d38a0f929769] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-controller-manager-ha-210800-m02" [e20ed11b-7d94-407a-a1cb-0440b3b29eb9] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-controller-manager-ha-210800-m03" [ff82d94b-b3f9-484c-ab24-aa37c6243cf7] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-proxy-6qdqt" [83aff3e5-b08d-4b7e-8dc2-c2fd1fd9bec7] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-proxy-rshfg" [2ce7075a-2b4a-4e31-80bf-7de27797a8d6] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-proxy-tnxck" [8cc3ed39-c2bd-4139-9ff6-1cbc0c210b5f] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-scheduler-ha-210800" [37fbafc0-eae6-407e-8b45-9c0181aca8dc] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-scheduler-ha-210800-m02" [51a4f5d3-0f41-4420-87ce-5ac44bb93e3c] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-scheduler-ha-210800-m03" [b6a0dd6e-e43f-40d1-a56b-841269b3e8a4] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-vip-ha-210800" [b1216eb2-830b-4756-97c6-a35d5e74c718] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-vip-ha-210800-m02" [ff2f83aa-9bdb-4dfc-98bf-d632984ef52d] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-vip-ha-210800-m03" [12dde05a-34a8-4d68-9c37-3c5398b5f146] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "storage-provisioner" [f05f26ec-1ebd-4111-adc5-825fc75a414d] Running
	I0507 18:41:55.338579    8396 system_pods.go:74] duration metric: took 171.7541ms to wait for pod list to return data ...
	I0507 18:41:55.338579    8396 default_sa.go:34] waiting for default service account to be created ...
	I0507 18:41:55.521253    8396 request.go:629] Waited for 182.6614ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/default/serviceaccounts
	I0507 18:41:55.521253    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/default/serviceaccounts
	I0507 18:41:55.521253    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:55.521253    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:55.521253    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:55.525994    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:55.526289    8396 default_sa.go:45] found service account: "default"
	I0507 18:41:55.526289    8396 default_sa.go:55] duration metric: took 187.697ms for default service account to be created ...
	I0507 18:41:55.526289    8396 system_pods.go:116] waiting for k8s-apps to be running ...
	I0507 18:41:55.708078    8396 request.go:629] Waited for 181.5099ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:41:55.708078    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:41:55.708078    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:55.708078    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:55.708078    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:55.723085    8396 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0507 18:41:55.733628    8396 system_pods.go:86] 24 kube-system pods found
	I0507 18:41:55.733628    8396 system_pods.go:89] "coredns-7db6d8ff4d-cr9nn" [24c45106-2ef4-4932-ae5d-549fb0177b13] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "coredns-7db6d8ff4d-dxsqf" [d32c637e-c641-4ef7-b2ed-b6449fe7d50f] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "etcd-ha-210800" [6888d4a2-b10e-4329-b3de-90fc4bb053f3] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "etcd-ha-210800-m02" [97f10401-7c02-421d-abe4-2b9f37dd3f39] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "etcd-ha-210800-m03" [5f8c792a-5610-476c-b0b2-3016b3b63926] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kindnet-57g8k" [6067a407-ee57-44ab-9591-9217deded72a] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kindnet-6xzk7" [313799a0-9188-4c07-817c-e46c98c84eb6] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kindnet-whrqx" [ded04b26-3100-453a-9c0f-0a7cced93180] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-apiserver-ha-210800" [74b614eb-d1ef-4707-b1a9-faeb68a9abf4] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-apiserver-ha-210800-m02" [3399e7eb-50f0-49a6-9dbe-1d5964e62a63] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-apiserver-ha-210800-m03" [e3215a44-5844-4caa-abb7-8acd94b221ad] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-controller-manager-ha-210800" [9d31f6b7-c758-4599-9087-d38a0f929769] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-controller-manager-ha-210800-m02" [e20ed11b-7d94-407a-a1cb-0440b3b29eb9] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-controller-manager-ha-210800-m03" [ff82d94b-b3f9-484c-ab24-aa37c6243cf7] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-proxy-6qdqt" [83aff3e5-b08d-4b7e-8dc2-c2fd1fd9bec7] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-proxy-rshfg" [2ce7075a-2b4a-4e31-80bf-7de27797a8d6] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-proxy-tnxck" [8cc3ed39-c2bd-4139-9ff6-1cbc0c210b5f] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-scheduler-ha-210800" [37fbafc0-eae6-407e-8b45-9c0181aca8dc] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-scheduler-ha-210800-m02" [51a4f5d3-0f41-4420-87ce-5ac44bb93e3c] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-scheduler-ha-210800-m03" [b6a0dd6e-e43f-40d1-a56b-841269b3e8a4] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-vip-ha-210800" [b1216eb2-830b-4756-97c6-a35d5e74c718] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-vip-ha-210800-m02" [ff2f83aa-9bdb-4dfc-98bf-d632984ef52d] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-vip-ha-210800-m03" [12dde05a-34a8-4d68-9c37-3c5398b5f146] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "storage-provisioner" [f05f26ec-1ebd-4111-adc5-825fc75a414d] Running
	I0507 18:41:55.733685    8396 system_pods.go:126] duration metric: took 207.3825ms to wait for k8s-apps to be running ...
	I0507 18:41:55.733685    8396 system_svc.go:44] waiting for kubelet service to be running ....
	I0507 18:41:55.741571    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 18:41:55.765567    8396 system_svc.go:56] duration metric: took 31.8799ms WaitForService to wait for kubelet
	I0507 18:41:55.765567    8396 kubeadm.go:576] duration metric: took 16.6289637s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 18:41:55.765894    8396 node_conditions.go:102] verifying NodePressure condition ...
	I0507 18:41:55.911291    8396 request.go:629] Waited for 145.3374ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes
	I0507 18:41:55.911625    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes
	I0507 18:41:55.911819    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:55.911819    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:55.911819    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:55.917283    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:41:55.919641    8396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 18:41:55.919699    8396 node_conditions.go:123] node cpu capacity is 2
	I0507 18:41:55.919699    8396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 18:41:55.919757    8396 node_conditions.go:123] node cpu capacity is 2
	I0507 18:41:55.919757    8396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 18:41:55.919757    8396 node_conditions.go:123] node cpu capacity is 2
	I0507 18:41:55.919757    8396 node_conditions.go:105] duration metric: took 153.8531ms to run NodePressure ...
	I0507 18:41:55.919757    8396 start.go:240] waiting for startup goroutines ...
	I0507 18:41:55.919823    8396 start.go:254] writing updated cluster config ...
	I0507 18:41:55.927927    8396 ssh_runner.go:195] Run: rm -f paused
	I0507 18:41:56.049806    8396 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0507 18:41:56.053257    8396 out.go:177] * Done! kubectl is now configured to use "ha-210800" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 07 18:34:57 ha-210800 cri-dockerd[1230]: time="2024-05-07T18:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a65cae5cd54a4e12693a16eb950300ee543f746f8615c5f9babee04f662d097b/resolv.conf as [nameserver 172.19.128.1]"
	May 07 18:34:57 ha-210800 cri-dockerd[1230]: time="2024-05-07T18:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/9e9fb991e5a5ab8d6b82f2d6177d44fee0cd2cc342debbc9fcf1a8840dec42f7/resolv.conf as [nameserver 172.19.128.1]"
	May 07 18:34:57 ha-210800 cri-dockerd[1230]: time="2024-05-07T18:34:57Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c11b861ad1aebf0cb35aedfa792eb92c484ceb6a3e7dc2a44168cf1c1f6424e1/resolv.conf as [nameserver 172.19.128.1]"
	May 07 18:34:57 ha-210800 dockerd[1330]: time="2024-05-07T18:34:57.761881847Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 18:34:57 ha-210800 dockerd[1330]: time="2024-05-07T18:34:57.761953049Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 18:34:57 ha-210800 dockerd[1330]: time="2024-05-07T18:34:57.761966849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:34:57 ha-210800 dockerd[1330]: time="2024-05-07T18:34:57.762306959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:34:57 ha-210800 dockerd[1330]: time="2024-05-07T18:34:57.948314245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 18:34:57 ha-210800 dockerd[1330]: time="2024-05-07T18:34:57.948411648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 18:34:57 ha-210800 dockerd[1330]: time="2024-05-07T18:34:57.948429949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:34:57 ha-210800 dockerd[1330]: time="2024-05-07T18:34:57.948563853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:34:58 ha-210800 dockerd[1330]: time="2024-05-07T18:34:58.003862041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 18:34:58 ha-210800 dockerd[1330]: time="2024-05-07T18:34:58.003982751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 18:34:58 ha-210800 dockerd[1330]: time="2024-05-07T18:34:58.004038755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:34:58 ha-210800 dockerd[1330]: time="2024-05-07T18:34:58.004229571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:42:30 ha-210800 dockerd[1330]: time="2024-05-07T18:42:30.186419392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 18:42:30 ha-210800 dockerd[1330]: time="2024-05-07T18:42:30.186614715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 18:42:30 ha-210800 dockerd[1330]: time="2024-05-07T18:42:30.186652920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:42:30 ha-210800 dockerd[1330]: time="2024-05-07T18:42:30.188673163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:42:30 ha-210800 cri-dockerd[1230]: time="2024-05-07T18:42:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b1d6504330bebeae4eeeff81fa941452b6f3245a3a80aa39f24526d7a0989f57/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 07 18:42:31 ha-210800 cri-dockerd[1230]: time="2024-05-07T18:42:31Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 07 18:42:31 ha-210800 dockerd[1330]: time="2024-05-07T18:42:31.842486781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 18:42:31 ha-210800 dockerd[1330]: time="2024-05-07T18:42:31.843265536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 18:42:31 ha-210800 dockerd[1330]: time="2024-05-07T18:42:31.843375543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:42:31 ha-210800 dockerd[1330]: time="2024-05-07T18:42:31.843817375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f8b94835b1deb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   59 seconds ago      Running             busybox                   0                   b1d6504330beb       busybox-fc5497c4f-pkgxl
	f09de8b01ca58       cbb01a7bd410d                                                                                         8 minutes ago       Running             coredns                   0                   c11b861ad1aeb       coredns-7db6d8ff4d-cr9nn
	a77f029cbd2de       cbb01a7bd410d                                                                                         8 minutes ago       Running             coredns                   0                   9e9fb991e5a5a       coredns-7db6d8ff4d-dxsqf
	2ac532428458f       6e38f40d628db                                                                                         8 minutes ago       Running             storage-provisioner       0                   a65cae5cd54a4       storage-provisioner
	3dcbef7bd0b66       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              8 minutes ago       Running             kindnet-cni               0                   14b94a1625979       kindnet-whrqx
	b876902be49e2       a0bf559e280cf                                                                                         8 minutes ago       Running             kube-proxy                0                   4313824e7fd6c       kube-proxy-6qdqt
	18ea360a18fd6       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     9 minutes ago       Running             kube-vip                  0                   73b333b99ce9e       kube-vip-ha-210800
	4fc364eaa2527       3861cfcd7c04c                                                                                         9 minutes ago       Running             etcd                      0                   ec0441a1413ba       etcd-ha-210800
	c22f717c4b95d       c42f13656d0b2                                                                                         9 minutes ago       Running             kube-apiserver            0                   818b2dd2ca6f4       kube-apiserver-ha-210800
	74353e51a6877       259c8277fcbbc                                                                                         9 minutes ago       Running             kube-scheduler            0                   bc9c4b58404e6       kube-scheduler-ha-210800
	cf981f1729cd7       c7aad43836fa5                                                                                         9 minutes ago       Running             kube-controller-manager   0                   d326bdf8575cd       kube-controller-manager-ha-210800
	
	
	==> coredns [a77f029cbd2d] <==
	[INFO] 10.244.2.2:53294 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.001342295s
	[INFO] 10.244.2.2:55335 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.087816092s
	[INFO] 10.244.0.4:34617 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000168111s
	[INFO] 10.244.1.2:33633 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177012s
	[INFO] 10.244.1.2:60462 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116308s
	[INFO] 10.244.1.2:51078 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024472724s
	[INFO] 10.244.1.2:54231 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107308s
	[INFO] 10.244.2.2:33146 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000300921s
	[INFO] 10.244.2.2:58735 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.022066754s
	[INFO] 10.244.2.2:33872 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000063805s
	[INFO] 10.244.0.4:54683 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231416s
	[INFO] 10.244.0.4:58329 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000112108s
	[INFO] 10.244.0.4:45568 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000201414s
	[INFO] 10.244.0.4:49397 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000238817s
	[INFO] 10.244.0.4:38120 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00013901s
	[INFO] 10.244.0.4:51207 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160111s
	[INFO] 10.244.1.2:49813 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163411s
	[INFO] 10.244.1.2:56905 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116908s
	[INFO] 10.244.2.2:33150 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074005s
	[INFO] 10.244.2.2:50679 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055104s
	[INFO] 10.244.0.4:42344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000199314s
	[INFO] 10.244.0.4:52324 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130709s
	[INFO] 10.244.2.2:38390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097507s
	[INFO] 10.244.0.4:49226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101007s
	[INFO] 10.244.0.4:43530 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00027882s
	
	
	==> coredns [f09de8b01ca5] <==
	[INFO] 10.244.1.2:55132 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168312s
	[INFO] 10.244.1.2:55132 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128209s
	[INFO] 10.244.1.2:46721 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094507s
	[INFO] 10.244.2.2:45292 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00013741s
	[INFO] 10.244.2.2:55232 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115408s
	[INFO] 10.244.2.2:55636 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000198514s
	[INFO] 10.244.2.2:42347 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000316422s
	[INFO] 10.244.2.2:42047 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067204s
	[INFO] 10.244.0.4:40064 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014381s
	[INFO] 10.244.0.4:45487 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071905s
	[INFO] 10.244.1.2:56546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171012s
	[INFO] 10.244.1.2:52521 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000191613s
	[INFO] 10.244.2.2:58214 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015391s
	[INFO] 10.244.2.2:36361 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163311s
	[INFO] 10.244.0.4:35616 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013741s
	[INFO] 10.244.0.4:50859 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000193914s
	[INFO] 10.244.1.2:36175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144411s
	[INFO] 10.244.1.2:55812 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189113s
	[INFO] 10.244.1.2:46867 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000253918s
	[INFO] 10.244.1.2:35616 - 5 "PTR IN 1.128.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099707s
	[INFO] 10.244.2.2:50751 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000231816s
	[INFO] 10.244.2.2:47535 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065805s
	[INFO] 10.244.2.2:59367 - 5 "PTR IN 1.128.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137909s
	[INFO] 10.244.0.4:41079 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091106s
	[INFO] 10.244.0.4:42737 - 5 "PTR IN 1.128.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078005s
	
	
	==> describe nodes <==
	Name:               ha-210800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-210800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	                    minikube.k8s.io/name=ha-210800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_07T18_34_32_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 May 2024 18:34:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-210800
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 May 2024 18:43:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 May 2024 18:43:11 +0000   Tue, 07 May 2024 18:34:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 May 2024 18:43:11 +0000   Tue, 07 May 2024 18:34:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 May 2024 18:43:11 +0000   Tue, 07 May 2024 18:34:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 May 2024 18:43:11 +0000   Tue, 07 May 2024 18:34:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.132.69
	  Hostname:    ha-210800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3762c80c825f49a3ae881c2a62f2f1d9
	  System UUID:                30a5d089-0cbf-a64e-9e54-7723c068114e
	  Boot ID:                    89e3cf68-dc62-4793-b3a7-44a759255eb8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pkgxl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 coredns-7db6d8ff4d-cr9nn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m45s
	  kube-system                 coredns-7db6d8ff4d-dxsqf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     8m45s
	  kube-system                 etcd-ha-210800                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         8m59s
	  kube-system                 kindnet-whrqx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m46s
	  kube-system                 kube-apiserver-ha-210800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 kube-controller-manager-ha-210800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 kube-proxy-6qdqt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m46s
	  kube-system                 kube-scheduler-ha-210800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 kube-vip-ha-210800                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m59s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m38s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 8m43s  kube-proxy       
	  Normal  Starting                 8m59s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8m59s  kubelet          Node ha-210800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8m59s  kubelet          Node ha-210800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m59s  kubelet          Node ha-210800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m59s  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8m46s  node-controller  Node ha-210800 event: Registered Node ha-210800 in Controller
	  Normal  NodeReady                8m34s  kubelet          Node ha-210800 status is now: NodeReady
	  Normal  RegisteredNode           5m8s   node-controller  Node ha-210800 event: Registered Node ha-210800 in Controller
	  Normal  RegisteredNode           97s    node-controller  Node ha-210800 event: Registered Node ha-210800 in Controller
	
	
	Name:               ha-210800-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-210800-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	                    minikube.k8s.io/name=ha-210800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_07T18_38_06_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 May 2024 18:38:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-210800-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 May 2024 18:43:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 May 2024 18:42:38 +0000   Tue, 07 May 2024 18:38:01 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 May 2024 18:42:38 +0000   Tue, 07 May 2024 18:38:01 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 May 2024 18:42:38 +0000   Tue, 07 May 2024 18:38:01 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 May 2024 18:42:38 +0000   Tue, 07 May 2024 18:38:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.135.87
	  Hostname:    ha-210800-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 9af5703e63fb4e0a895ffc20f6471fe2
	  System UUID:                2d5aaff5-a686-984b-8ed1-ccbdc90fbe68
	  Boot ID:                    3e65c516-d05b-44fc-a070-284b6aea479b
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-45d7p                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 etcd-ha-210800-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m25s
	  kube-system                 kindnet-57g8k                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m29s
	  kube-system                 kube-apiserver-ha-210800-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-controller-manager-ha-210800-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m26s
	  kube-system                 kube-proxy-rshfg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m29s
	  kube-system                 kube-scheduler-ha-210800-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m25s
	  kube-system                 kube-vip-ha-210800-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m23s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m22s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  5m29s (x8 over 5m29s)  kubelet          Node ha-210800-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m29s (x8 over 5m29s)  kubelet          Node ha-210800-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m29s (x7 over 5m29s)  kubelet          Node ha-210800-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m26s                  node-controller  Node ha-210800-m02 event: Registered Node ha-210800-m02 in Controller
	  Normal  RegisteredNode           5m8s                   node-controller  Node ha-210800-m02 event: Registered Node ha-210800-m02 in Controller
	  Normal  RegisteredNode           97s                    node-controller  Node ha-210800-m02 event: Registered Node ha-210800-m02 in Controller
	
	
	Name:               ha-210800-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-210800-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	                    minikube.k8s.io/name=ha-210800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_07T18_41_38_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 May 2024 18:41:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-210800-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 May 2024 18:43:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 May 2024 18:42:33 +0000   Tue, 07 May 2024 18:41:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 May 2024 18:42:33 +0000   Tue, 07 May 2024 18:41:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 May 2024 18:42:33 +0000   Tue, 07 May 2024 18:41:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 May 2024 18:42:33 +0000   Tue, 07 May 2024 18:41:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.137.224
	  Hostname:    ha-210800-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d50db00b4138473185a198a540b0b97e
	  System UUID:                dac136fa-cba9-624b-b4aa-a625b5da5027
	  Boot ID:                    55352b6c-080b-4436-a6af-1832e99644a9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5z998                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         61s
	  kube-system                 etcd-ha-210800-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         117s
	  kube-system                 kindnet-6xzk7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      119s
	  kube-system                 kube-apiserver-ha-210800-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-ha-210800-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-tnxck                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         119s
	  kube-system                 kube-scheduler-ha-210800-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         115s
	  kube-system                 kube-vip-ha-210800-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 112s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  119s (x8 over 119s)  kubelet          Node ha-210800-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    119s (x8 over 119s)  kubelet          Node ha-210800-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     119s (x7 over 119s)  kubelet          Node ha-210800-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  119s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           118s                 node-controller  Node ha-210800-m03 event: Registered Node ha-210800-m03 in Controller
	  Normal  RegisteredNode           116s                 node-controller  Node ha-210800-m03 event: Registered Node ha-210800-m03 in Controller
	  Normal  RegisteredNode           97s                  node-controller  Node ha-210800-m03 event: Registered Node ha-210800-m03 in Controller
	
	
	==> dmesg <==
	[  +1.177492] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.071372] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May 7 18:33] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.163170] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[ +28.479803] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.094426] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.505844] systemd-fstab-generator[985]: Ignoring "noauto" option for root device
	[  +0.183097] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.198618] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[May 7 18:34] systemd-fstab-generator[1183]: Ignoring "noauto" option for root device
	[  +0.178318] systemd-fstab-generator[1195]: Ignoring "noauto" option for root device
	[  +0.180207] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.256957] systemd-fstab-generator[1222]: Ignoring "noauto" option for root device
	[ +11.617075] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.084567] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.756347] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
	[  +5.583391] systemd-fstab-generator[1711]: Ignoring "noauto" option for root device
	[  +0.096605] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.603939] kauditd_printk_skb: 67 callbacks suppressed
	[  +2.935224] systemd-fstab-generator[2196]: Ignoring "noauto" option for root device
	[ +15.022371] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.256164] kauditd_printk_skb: 29 callbacks suppressed
	[May 7 18:38] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [4fc364eaa252] <==
	{"level":"info","ts":"2024-05-07T18:41:35.147588Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"8e95dbab746ce898","remote-peer-id":"a3b8ee67399a2a4a"}
	{"level":"info","ts":"2024-05-07T18:41:35.162786Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"8e95dbab746ce898","remote-peer-id":"a3b8ee67399a2a4a"}
	{"level":"info","ts":"2024-05-07T18:41:35.187693Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"8e95dbab746ce898","remote-peer-id":"a3b8ee67399a2a4a"}
	{"level":"info","ts":"2024-05-07T18:41:35.188011Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"8e95dbab746ce898","to":"a3b8ee67399a2a4a","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-05-07T18:41:35.188197Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"8e95dbab746ce898","remote-peer-id":"a3b8ee67399a2a4a"}
	{"level":"info","ts":"2024-05-07T18:41:35.368324Z","caller":"traceutil/trace.go:171","msg":"trace[658922471] linearizableReadLoop","detail":"{readStateIndex:1565; appliedIndex:1565; }","duration":"134.193579ms","start":"2024-05-07T18:41:35.234116Z","end":"2024-05-07T18:41:35.36831Z","steps":["trace[658922471] 'read index received'  (duration: 134.178977ms)","trace[658922471] 'applied index is now lower than readState.Index'  (duration: 13.302µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-07T18:41:35.380138Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"146.006635ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-ha-210800-m03\" ","response":"range_response_count:1 size:5532"}
	{"level":"info","ts":"2024-05-07T18:41:35.380441Z","caller":"traceutil/trace.go:171","msg":"trace[333662910] range","detail":"{range_begin:/registry/pods/kube-system/etcd-ha-210800-m03; range_end:; response_count:1; response_revision:1410; }","duration":"146.340187ms","start":"2024-05-07T18:41:35.234087Z","end":"2024-05-07T18:41:35.380427Z","steps":["trace[333662910] 'agreement among raft nodes before linearized reading'  (duration: 134.294795ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-07T18:41:35.381922Z","caller":"traceutil/trace.go:171","msg":"trace[685627286] transaction","detail":"{read_only:false; response_revision:1411; number_of_response:1; }","duration":"127.38841ms","start":"2024-05-07T18:41:35.254521Z","end":"2024-05-07T18:41:35.38191Z","steps":["trace[685627286] 'process raft request'  (duration: 125.117653ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-07T18:41:36.083637Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"a3b8ee67399a2a4a","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-05-07T18:41:37.082552Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"a3b8ee67399a2a4a","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-05-07T18:41:38.091207Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"8e95dbab746ce898 switched to configuration voters=(4654784288144808189 10274359654354839704 11797441351012461130)"}
	{"level":"info","ts":"2024-05-07T18:41:38.091445Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"59eaaf380782080d","local-member-id":"8e95dbab746ce898"}
	{"level":"info","ts":"2024-05-07T18:41:38.091592Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"8e95dbab746ce898","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"a3b8ee67399a2a4a"}
	{"level":"info","ts":"2024-05-07T18:41:42.138455Z","caller":"traceutil/trace.go:171","msg":"trace[1934381450] linearizableReadLoop","detail":"{readStateIndex:1622; appliedIndex:1622; }","duration":"321.463232ms","start":"2024-05-07T18:41:41.816978Z","end":"2024-05-07T18:41:42.138441Z","steps":["trace[1934381450] 'read index received'  (duration: 321.459031ms)","trace[1934381450] 'applied index is now lower than readState.Index'  (duration: 2.801µs)"],"step_count":2}
	{"level":"warn","ts":"2024-05-07T18:41:42.138982Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"322.057122ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-210800-m03\" ","response":"range_response_count:1 size:4142"}
	{"level":"info","ts":"2024-05-07T18:41:42.139091Z","caller":"traceutil/trace.go:171","msg":"trace[1410207336] range","detail":"{range_begin:/registry/minions/ha-210800-m03; range_end:; response_count:1; response_revision:1459; }","duration":"322.18024ms","start":"2024-05-07T18:41:41.816882Z","end":"2024-05-07T18:41:42.139062Z","steps":["trace[1410207336] 'agreement among raft nodes before linearized reading'  (duration: 321.950805ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-07T18:41:42.13922Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-07T18:41:41.816871Z","time spent":"322.339064ms","remote":"127.0.0.1:45812","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":4165,"request content":"key:\"/registry/minions/ha-210800-m03\" "}
	{"level":"info","ts":"2024-05-07T18:41:42.139327Z","caller":"traceutil/trace.go:171","msg":"trace[381168874] transaction","detail":"{read_only:false; response_revision:1460; number_of_response:1; }","duration":"340.097668ms","start":"2024-05-07T18:41:41.799155Z","end":"2024-05-07T18:41:42.139253Z","steps":["trace[381168874] 'process raft request'  (duration: 339.838329ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-07T18:41:42.139498Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-07T18:41:41.799093Z","time spent":"340.279895ms","remote":"127.0.0.1:45902","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":525,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-node-lease/ha-210800\" mod_revision:1324 > success:<request_put:<key:\"/registry/leases/kube-node-lease/ha-210800\" value_size:475 >> failure:<request_range:<key:\"/registry/leases/kube-node-lease/ha-210800\" > >"}
	{"level":"warn","ts":"2024-05-07T18:41:42.173152Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"305.842926ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-vip-ha-210800-m03\" ","response":"range_response_count:1 size:3441"}
	{"level":"info","ts":"2024-05-07T18:41:42.173207Z","caller":"traceutil/trace.go:171","msg":"trace[845564176] range","detail":"{range_begin:/registry/pods/kube-system/kube-vip-ha-210800-m03; range_end:; response_count:1; response_revision:1460; }","duration":"305.925938ms","start":"2024-05-07T18:41:41.867268Z","end":"2024-05-07T18:41:42.173194Z","steps":["trace[845564176] 'agreement among raft nodes before linearized reading'  (duration: 274.588792ms)","trace[845564176] 'range keys from in-memory index tree'  (duration: 31.171321ms)"],"step_count":2}
	{"level":"warn","ts":"2024-05-07T18:41:42.173232Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-07T18:41:41.867256Z","time spent":"305.970645ms","remote":"127.0.0.1:45820","response type":"/etcdserverpb.KV/Range","request count":0,"request size":51,"response count":1,"response size":3464,"request content":"key:\"/registry/pods/kube-system/kube-vip-ha-210800-m03\" "}
	{"level":"warn","ts":"2024-05-07T18:41:42.173538Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.970792ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-07T18:41:42.17359Z","caller":"traceutil/trace.go:171","msg":"trace[1425532260] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1460; }","duration":"101.048904ms","start":"2024-05-07T18:41:42.072532Z","end":"2024-05-07T18:41:42.173581Z","steps":["trace[1425532260] 'agreement among raft nodes before linearized reading'  (duration: 69.472122ms)","trace[1425532260] 'range keys from in-memory index tree'  (duration: 31.511872ms)"],"step_count":2}
	
	
	==> kernel <==
	 18:43:30 up 11 min,  0 users,  load average: 0.74, 0.67, 0.38
	Linux ha-210800 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3dcbef7bd0b6] <==
	I0507 18:42:43.687327       1 main.go:250] Node ha-210800-m03 has CIDR [10.244.2.0/24] 
	I0507 18:42:53.703813       1 main.go:223] Handling node with IPs: map[172.19.132.69:{}]
	I0507 18:42:53.703896       1 main.go:227] handling current node
	I0507 18:42:53.703908       1 main.go:223] Handling node with IPs: map[172.19.135.87:{}]
	I0507 18:42:53.703916       1 main.go:250] Node ha-210800-m02 has CIDR [10.244.1.0/24] 
	I0507 18:42:53.704491       1 main.go:223] Handling node with IPs: map[172.19.137.224:{}]
	I0507 18:42:53.704524       1 main.go:250] Node ha-210800-m03 has CIDR [10.244.2.0/24] 
	I0507 18:43:03.722637       1 main.go:223] Handling node with IPs: map[172.19.132.69:{}]
	I0507 18:43:03.722812       1 main.go:227] handling current node
	I0507 18:43:03.722826       1 main.go:223] Handling node with IPs: map[172.19.135.87:{}]
	I0507 18:43:03.722834       1 main.go:250] Node ha-210800-m02 has CIDR [10.244.1.0/24] 
	I0507 18:43:03.723404       1 main.go:223] Handling node with IPs: map[172.19.137.224:{}]
	I0507 18:43:03.723431       1 main.go:250] Node ha-210800-m03 has CIDR [10.244.2.0/24] 
	I0507 18:43:13.735506       1 main.go:223] Handling node with IPs: map[172.19.132.69:{}]
	I0507 18:43:13.735619       1 main.go:227] handling current node
	I0507 18:43:13.735632       1 main.go:223] Handling node with IPs: map[172.19.135.87:{}]
	I0507 18:43:13.735640       1 main.go:250] Node ha-210800-m02 has CIDR [10.244.1.0/24] 
	I0507 18:43:13.736192       1 main.go:223] Handling node with IPs: map[172.19.137.224:{}]
	I0507 18:43:13.736303       1 main.go:250] Node ha-210800-m03 has CIDR [10.244.2.0/24] 
	I0507 18:43:23.745336       1 main.go:223] Handling node with IPs: map[172.19.132.69:{}]
	I0507 18:43:23.745363       1 main.go:227] handling current node
	I0507 18:43:23.745374       1 main.go:223] Handling node with IPs: map[172.19.135.87:{}]
	I0507 18:43:23.745380       1 main.go:250] Node ha-210800-m02 has CIDR [10.244.1.0/24] 
	I0507 18:43:23.746046       1 main.go:223] Handling node with IPs: map[172.19.137.224:{}]
	I0507 18:43:23.746078       1 main.go:250] Node ha-210800-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [c22f717c4b95] <==
	Trace[1355887999]: [920.476714ms] [920.476714ms] END
	I0507 18:41:27.698295       1 trace.go:236] Trace[2127319967]: "Update" accept:application/json, */*,audit-id:d0f7a828-ca56-4df6-9ab5-c0dc4576852c,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:PUT (07-May-2024 18:41:27.136) (total time: 561ms):
	Trace[2127319967]: ["GuaranteedUpdate etcd3" audit-id:d0f7a828-ca56-4df6-9ab5-c0dc4576852c,key:/leases/kube-system/plndr-cp-lock,type:*coordination.Lease,resource:leases.coordination.k8s.io 560ms (18:41:27.137)
	Trace[2127319967]:  ---"Txn call completed" 560ms (18:41:27.698)]
	Trace[2127319967]: [561.785874ms] [561.785874ms] END
	E0507 18:41:32.758240       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0507 18:41:32.758347       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0507 18:41:32.758383       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 8.802µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0507 18:41:32.759891       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0507 18:41:32.760248       1 timeout.go:142] post-timeout activity - time-elapsed: 2.029123ms, PATCH "/api/v1/namespaces/default/events/ha-210800-m03.17cd48feb302260a" result: <nil>
	E0507 18:42:34.860099       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51288: use of closed network connection
	E0507 18:42:36.442131       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51290: use of closed network connection
	E0507 18:42:36.904774       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51292: use of closed network connection
	E0507 18:42:37.414097       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51294: use of closed network connection
	E0507 18:42:37.869061       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51296: use of closed network connection
	E0507 18:42:38.286676       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51298: use of closed network connection
	E0507 18:42:38.697286       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51300: use of closed network connection
	E0507 18:42:39.121239       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51302: use of closed network connection
	E0507 18:42:39.567325       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51304: use of closed network connection
	E0507 18:42:40.315976       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51307: use of closed network connection
	E0507 18:42:50.736641       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51309: use of closed network connection
	E0507 18:42:51.150409       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51314: use of closed network connection
	E0507 18:43:01.578109       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51316: use of closed network connection
	E0507 18:43:01.994136       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51318: use of closed network connection
	E0507 18:43:12.416957       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51320: use of closed network connection
	
	
	==> kube-controller-manager [cf981f1729cd] <==
	I0507 18:34:59.163262       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.037211ms"
	I0507 18:34:59.163329       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="38.403µs"
	I0507 18:34:59.209484       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="18.941842ms"
	I0507 18:34:59.209769       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="82.807µs"
	I0507 18:38:01.802239       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-210800-m02\" does not exist"
	I0507 18:38:01.857688       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-210800-m02" podCIDRs=["10.244.1.0/24"]
	I0507 18:38:04.107482       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-210800-m02"
	E0507 18:41:31.837260       1 certificate_controller.go:146] Sync csr-gpjbz failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-gpjbz": the object has been modified; please apply your changes to the latest version and try again
	I0507 18:41:31.909256       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-210800-m03\" does not exist"
	I0507 18:41:31.946346       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-210800-m03" podCIDRs=["10.244.2.0/24"]
	I0507 18:41:34.219021       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-210800-m03"
	I0507 18:42:29.303843       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="115.826284ms"
	I0507 18:42:29.358279       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.370864ms"
	I0507 18:42:29.542409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="183.821592ms"
	I0507 18:42:29.772737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="230.124183ms"
	I0507 18:42:29.804774       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.984662ms"
	I0507 18:42:29.805114       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.212µs"
	I0507 18:42:31.350995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="139.11µs"
	I0507 18:42:31.755560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.703µs"
	I0507 18:42:31.902466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="222.415µs"
	I0507 18:42:32.393154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.043827ms"
	I0507 18:42:32.393309       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.004µs"
	I0507 18:42:32.542787       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.518386ms"
	I0507 18:42:32.564643       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.796839ms"
	I0507 18:42:32.564779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.606µs"
	
	
	==> kube-proxy [b876902be49e] <==
	I0507 18:34:46.534436       1 server_linux.go:69] "Using iptables proxy"
	I0507 18:34:46.610982       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.132.69"]
	I0507 18:34:46.662572       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0507 18:34:46.662679       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0507 18:34:46.662726       1 server_linux.go:165] "Using iptables Proxier"
	I0507 18:34:46.666466       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0507 18:34:46.667450       1 server.go:872] "Version info" version="v1.30.0"
	I0507 18:34:46.667762       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 18:34:46.670202       1 config.go:192] "Starting service config controller"
	I0507 18:34:46.670945       1 config.go:101] "Starting endpoint slice config controller"
	I0507 18:34:46.671296       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0507 18:34:46.672219       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0507 18:34:46.675362       1 config.go:319] "Starting node config controller"
	I0507 18:34:46.676170       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0507 18:34:46.773861       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0507 18:34:46.773924       1 shared_informer.go:320] Caches are synced for service config
	I0507 18:34:46.776515       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [74353e51a687] <==
	W0507 18:34:29.049474       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0507 18:34:29.049571       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0507 18:34:29.050508       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0507 18:34:29.050600       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0507 18:34:29.130373       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0507 18:34:29.130560       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0507 18:34:29.194294       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0507 18:34:29.194322       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0507 18:34:29.201175       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0507 18:34:29.201212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0507 18:34:29.382625       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0507 18:34:29.382869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0507 18:34:29.386321       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0507 18:34:29.386358       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0507 18:34:29.451226       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0507 18:34:29.452168       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0507 18:34:29.452040       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0507 18:34:29.452756       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0507 18:34:29.462385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0507 18:34:29.463415       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0507 18:34:31.638259       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0507 18:42:29.271983       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-45d7p\": pod busybox-fc5497c4f-45d7p is already assigned to node \"ha-210800-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-45d7p" node="ha-210800-m02"
	E0507 18:42:29.272368       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c4b0b74b-2782-4a8c-9ccb-822e2beb946e(default/busybox-fc5497c4f-45d7p) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-45d7p"
	E0507 18:42:29.272726       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-45d7p\": pod busybox-fc5497c4f-45d7p is already assigned to node \"ha-210800-m02\"" pod="default/busybox-fc5497c4f-45d7p"
	I0507 18:42:29.272925       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-45d7p" node="ha-210800-m02"
	
	
	==> kubelet <==
	May 07 18:38:31 ha-210800 kubelet[2203]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 18:39:31 ha-210800 kubelet[2203]: E0507 18:39:31.361396    2203 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 18:39:31 ha-210800 kubelet[2203]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 18:39:31 ha-210800 kubelet[2203]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 18:39:31 ha-210800 kubelet[2203]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 18:39:31 ha-210800 kubelet[2203]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 18:40:31 ha-210800 kubelet[2203]: E0507 18:40:31.356278    2203 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 18:40:31 ha-210800 kubelet[2203]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 18:40:31 ha-210800 kubelet[2203]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 18:40:31 ha-210800 kubelet[2203]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 18:40:31 ha-210800 kubelet[2203]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 18:41:31 ha-210800 kubelet[2203]: E0507 18:41:31.356418    2203 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 18:41:31 ha-210800 kubelet[2203]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 18:41:31 ha-210800 kubelet[2203]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 18:41:31 ha-210800 kubelet[2203]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 18:41:31 ha-210800 kubelet[2203]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 18:42:29 ha-210800 kubelet[2203]: I0507 18:42:29.315877    2203 topology_manager.go:215] "Topology Admit Handler" podUID="0b88711b-0c0a-4835-9298-ac03e22c2e84" podNamespace="default" podName="busybox-fc5497c4f-pkgxl"
	May 07 18:42:29 ha-210800 kubelet[2203]: I0507 18:42:29.497363    2203 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgxqn\" (UniqueName: \"kubernetes.io/projected/0b88711b-0c0a-4835-9298-ac03e22c2e84-kube-api-access-fgxqn\") pod \"busybox-fc5497c4f-pkgxl\" (UID: \"0b88711b-0c0a-4835-9298-ac03e22c2e84\") " pod="default/busybox-fc5497c4f-pkgxl"
	May 07 18:42:30 ha-210800 kubelet[2203]: I0507 18:42:30.392428    2203 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1d6504330bebeae4eeeff81fa941452b6f3245a3a80aa39f24526d7a0989f57"
	May 07 18:42:31 ha-210800 kubelet[2203]: E0507 18:42:31.369781    2203 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 18:42:31 ha-210800 kubelet[2203]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 18:42:31 ha-210800 kubelet[2203]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 18:42:31 ha-210800 kubelet[2203]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 18:42:31 ha-210800 kubelet[2203]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 18:42:32 ha-210800 kubelet[2203]: I0507 18:42:32.478455    2203 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox-fc5497c4f-pkgxl" podStartSLOduration=2.385117752 podStartE2EDuration="3.478430895s" podCreationTimestamp="2024-05-07 18:42:29 +0000 UTC" firstStartedPulling="2024-05-07 18:42:30.446548857 +0000 UTC m=+479.303962702" lastFinishedPulling="2024-05-07 18:42:31.539862 +0000 UTC m=+480.397275845" observedRunningTime="2024-05-07 18:42:32.477452126 +0000 UTC m=+481.334865971" watchObservedRunningTime="2024-05-07 18:42:32.478430895 +0000 UTC m=+481.335844740"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0507 18:43:23.196746    9424 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-210800 -n ha-210800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-210800 -n ha-210800: (10.9329138s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-210800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (63.76s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (257.13s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 node start m02 -v=7 --alsologtostderr
E0507 19:00:01.213046    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 19:00:06.419221    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 19:00:23.217617    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
ha_test.go:420: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-210800 node start m02 -v=7 --alsologtostderr: exit status 1 (2m57.0957525s)

                                                
                                                
-- stdout --
	* Starting "ha-210800-m02" control-plane node in "ha-210800" cluster
	* Restarting existing hyperv VM for "ha-210800-m02" ...
	* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	* Verifying Kubernetes components...
	* Enabled addons: 

                                                
                                                
-- /stdout --
** stderr ** 
	W0507 18:58:43.242452    7688 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0507 18:58:43.300495    7688 out.go:291] Setting OutFile to fd 636 ...
	I0507 18:58:43.315061    7688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 18:58:43.315061    7688 out.go:304] Setting ErrFile to fd 724...
	I0507 18:58:43.315263    7688 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 18:58:43.328287    7688 mustload.go:65] Loading cluster: ha-210800
	I0507 18:58:43.329358    7688 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:58:43.329764    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:58:45.236007    7688 main.go:141] libmachine: [stdout =====>] : Off
	
	I0507 18:58:45.236103    7688 main.go:141] libmachine: [stderr =====>] : 
	W0507 18:58:45.236190    7688 host.go:58] "ha-210800-m02" host status: Stopped
	I0507 18:58:45.239198    7688 out.go:177] * Starting "ha-210800-m02" control-plane node in "ha-210800" cluster
	I0507 18:58:45.241405    7688 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 18:58:45.241405    7688 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0507 18:58:45.241405    7688 cache.go:56] Caching tarball of preloaded images
	I0507 18:58:45.242076    7688 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0507 18:58:45.242076    7688 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 18:58:45.242607    7688 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:58:45.243369    7688 start.go:360] acquireMachinesLock for ha-210800-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 18:58:45.244308    7688 start.go:364] duration metric: took 938.6µs to acquireMachinesLock for "ha-210800-m02"
	I0507 18:58:45.244308    7688 start.go:96] Skipping create...Using existing machine configuration
	I0507 18:58:45.244308    7688 fix.go:54] fixHost starting: m02
	I0507 18:58:45.244308    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:58:47.157733    7688 main.go:141] libmachine: [stdout =====>] : Off
	
	I0507 18:58:47.157733    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:47.157733    7688 fix.go:112] recreateIfNeeded on ha-210800-m02: state=Stopped err=<nil>
	W0507 18:58:47.157820    7688 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 18:58:47.163622    7688 out.go:177] * Restarting existing hyperv VM for "ha-210800-m02" ...
	I0507 18:58:47.166279    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-210800-m02
	I0507 18:58:49.992929    7688 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:58:49.993422    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:49.993422    7688 main.go:141] libmachine: Waiting for host to start...
	I0507 18:58:49.993485    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:58:52.036231    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:58:52.036295    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:52.036366    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:58:54.325674    7688 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:58:54.325674    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:55.334070    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:58:57.302999    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:58:57.302999    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:57.302999    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:58:59.556012    7688 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:58:59.556291    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:00.561618    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:59:02.530615    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:59:02.531164    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:02.531164    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:59:04.754319    7688 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:59:04.754319    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:05.765393    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:59:07.733688    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:59:07.733688    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:07.733688    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:59:09.973623    7688 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:59:09.974638    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:10.985996    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:59:12.970306    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:59:12.970306    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:12.970306    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:59:15.299814    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 18:59:15.299814    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:15.301720    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:59:17.222482    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:59:17.222482    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:17.223671    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:59:19.491643    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 18:59:19.491643    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:19.492398    7688 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:59:19.494127    7688 machine.go:94] provisionDockerMachine start ...
	I0507 18:59:19.494269    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:59:21.414192    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:59:21.414192    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:21.414192    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:59:23.695912    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 18:59:23.695912    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:23.702210    7688 main.go:141] libmachine: Using SSH client type: native
	I0507 18:59:23.702850    7688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.44 22 <nil> <nil>}
	I0507 18:59:23.702850    7688 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 18:59:23.829150    7688 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0507 18:59:23.829715    7688 buildroot.go:166] provisioning hostname "ha-210800-m02"
	I0507 18:59:23.829715    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:59:25.791932    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:59:25.791932    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:25.791932    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:59:28.050592    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 18:59:28.050592    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:28.055456    7688 main.go:141] libmachine: Using SSH client type: native
	I0507 18:59:28.055861    7688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.44 22 <nil> <nil>}
	I0507 18:59:28.055912    7688 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-210800-m02 && echo "ha-210800-m02" | sudo tee /etc/hostname
	I0507 18:59:28.218339    7688 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-210800-m02
	
	I0507 18:59:28.218339    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:59:30.103756    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:59:30.104552    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:30.104627    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:59:32.371717    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 18:59:32.371855    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:32.374711    7688 main.go:141] libmachine: Using SSH client type: native
	I0507 18:59:32.375331    7688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.44 22 <nil> <nil>}
	I0507 18:59:32.375331    7688 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-210800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-210800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-210800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 18:59:32.527454    7688 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 18:59:32.527565    7688 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0507 18:59:32.527663    7688 buildroot.go:174] setting up certificates
	I0507 18:59:32.527663    7688 provision.go:84] configureAuth start
	I0507 18:59:32.527754    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:59:34.427095    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:59:34.427095    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:34.427864    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:59:36.695660    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 18:59:36.695660    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:36.696497    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:59:38.608011    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:59:38.608535    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:38.608637    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:59:40.919749    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 18:59:40.919749    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:40.919808    7688 provision.go:143] copyHostCerts
	I0507 18:59:40.919967    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0507 18:59:40.920233    7688 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0507 18:59:40.920233    7688 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0507 18:59:40.920335    7688 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0507 18:59:40.920924    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0507 18:59:40.921562    7688 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0507 18:59:40.921562    7688 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0507 18:59:40.921901    7688 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0507 18:59:40.922835    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0507 18:59:40.922835    7688 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0507 18:59:40.922835    7688 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0507 18:59:40.923359    7688 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0507 18:59:40.924057    7688 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-210800-m02 san=[127.0.0.1 172.19.143.44 ha-210800-m02 localhost minikube]
	I0507 18:59:41.109396    7688 provision.go:177] copyRemoteCerts
	I0507 18:59:41.117780    7688 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 18:59:41.117780    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:59:43.052040    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:59:43.052040    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:43.052121    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:59:45.390195    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 18:59:45.390195    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:45.390195    7688 sshutil.go:53] new ssh client: &{IP:172.19.143.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
	I0507 18:59:45.495889    7688 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3778172s)
	I0507 18:59:45.495889    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0507 18:59:45.496889    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0507 18:59:45.540421    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0507 18:59:45.541048    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0507 18:59:45.593154    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0507 18:59:45.593154    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0507 18:59:45.640799    7688 provision.go:87] duration metric: took 13.1122604s to configureAuth
	I0507 18:59:45.640799    7688 buildroot.go:189] setting minikube options for container-runtime
	I0507 18:59:45.641404    7688 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:59:45.641404    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:59:47.535673    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:59:47.535673    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:47.536728    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:59:49.808622    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 18:59:49.809244    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:49.815010    7688 main.go:141] libmachine: Using SSH client type: native
	I0507 18:59:49.815010    7688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.44 22 <nil> <nil>}
	I0507 18:59:49.815010    7688 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 18:59:49.953893    7688 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 18:59:49.953893    7688 buildroot.go:70] root file system type: tmpfs
	I0507 18:59:49.953893    7688 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 18:59:49.953893    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:59:51.896710    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:59:51.897619    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:51.897713    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:59:54.170505    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 18:59:54.171346    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:54.175218    7688 main.go:141] libmachine: Using SSH client type: native
	I0507 18:59:54.175525    7688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.44 22 <nil> <nil>}
	I0507 18:59:54.175525    7688 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 18:59:54.338904    7688 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 18:59:54.338904    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:59:56.296803    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:59:56.296803    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:56.296889    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:59:58.558613    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 18:59:58.558687    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:59:58.562112    7688 main.go:141] libmachine: Using SSH client type: native
	I0507 18:59:58.562112    7688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.44 22 <nil> <nil>}
	I0507 18:59:58.562112    7688 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 19:00:00.968325    7688 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0507 19:00:00.968390    7688 machine.go:97] duration metric: took 41.4714388s to provisionDockerMachine
	I0507 19:00:00.968460    7688 start.go:293] postStartSetup for "ha-210800-m02" (driver="hyperv")
	I0507 19:00:00.968460    7688 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 19:00:00.976963    7688 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 19:00:00.976963    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 19:00:02.959576    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:00:02.959658    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:02.959749    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:00:05.366834    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 19:00:05.366834    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:05.366977    7688 sshutil.go:53] new ssh client: &{IP:172.19.143.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
	I0507 19:00:05.481562    7688 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5042426s)
	I0507 19:00:05.490430    7688 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 19:00:05.497336    7688 info.go:137] Remote host: Buildroot 2023.02.9
	I0507 19:00:05.497336    7688 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0507 19:00:05.497412    7688 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0507 19:00:05.498343    7688 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> 99922.pem in /etc/ssl/certs
	I0507 19:00:05.498343    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /etc/ssl/certs/99922.pem
	I0507 19:00:05.507188    7688 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0507 19:00:05.526307    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /etc/ssl/certs/99922.pem (1708 bytes)
	I0507 19:00:05.570074    7688 start.go:296] duration metric: took 4.6012438s for postStartSetup
	I0507 19:00:05.580400    7688 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0507 19:00:05.580400    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 19:00:07.569273    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:00:07.569273    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:07.569776    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:00:09.932111    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 19:00:09.932111    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:09.932665    7688 sshutil.go:53] new ssh client: &{IP:172.19.143.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
	I0507 19:00:10.038122    7688 ssh_runner.go:235] Completed: sudo ls --almost-all -1 /var/lib/minikube/backup: (4.4574263s)
	I0507 19:00:10.038122    7688 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0507 19:00:10.047099    7688 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0507 19:00:10.120364    7688 fix.go:56] duration metric: took 1m24.8703865s for fixHost
	I0507 19:00:10.120364    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 19:00:12.094101    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:00:12.094101    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:12.094309    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:00:14.456347    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 19:00:14.456347    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:14.460611    7688 main.go:141] libmachine: Using SSH client type: native
	I0507 19:00:14.460611    7688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.44 22 <nil> <nil>}
	I0507 19:00:14.460611    7688 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0507 19:00:14.588550    7688 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715108414.825055047
	
	I0507 19:00:14.588550    7688 fix.go:216] guest clock: 1715108414.825055047
	I0507 19:00:14.588550    7688 fix.go:229] Guest: 2024-05-07 19:00:14.825055047 +0000 UTC Remote: 2024-05-07 19:00:10.1203644 +0000 UTC m=+86.941139301 (delta=4.704690647s)
	I0507 19:00:14.588550    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 19:00:16.560162    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:00:16.560540    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:16.560707    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:00:18.968607    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 19:00:18.968607    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:18.973035    7688 main.go:141] libmachine: Using SSH client type: native
	I0507 19:00:18.973468    7688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.44 22 <nil> <nil>}
	I0507 19:00:18.973468    7688 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715108414
	I0507 19:00:19.114443    7688 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May  7 19:00:14 UTC 2024
	
	I0507 19:00:19.114443    7688 fix.go:236] clock set: Tue May  7 19:00:14 UTC 2024
	 (err=<nil>)
	I0507 19:00:19.114443    7688 start.go:83] releasing machines lock for "ha-210800-m02", held for 1m33.863868s
	I0507 19:00:19.115738    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 19:00:21.049499    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:00:21.049499    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:21.049499    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:00:23.414606    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 19:00:23.414606    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:23.418767    7688 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 19:00:23.418849    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 19:00:23.426440    7688 ssh_runner.go:195] Run: systemctl --version
	I0507 19:00:23.426440    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 19:00:25.416717    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:00:25.416807    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:25.416955    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:00:25.431028    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:00:25.431028    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:25.431028    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:00:27.884011    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 19:00:27.884011    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:27.885068    7688 sshutil.go:53] new ssh client: &{IP:172.19.143.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
	I0507 19:00:27.903243    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44
	
	I0507 19:00:27.903243    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:27.903243    7688 sshutil.go:53] new ssh client: &{IP:172.19.143.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
	I0507 19:00:28.046256    7688 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6271829s)
	I0507 19:00:28.046256    7688 ssh_runner.go:235] Completed: systemctl --version: (4.6195098s)
	I0507 19:00:28.056409    7688 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0507 19:00:28.065952    7688 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 19:00:28.074274    7688 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0507 19:00:28.102288    7688 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0507 19:00:28.102424    7688 start.go:494] detecting cgroup driver to use...
	I0507 19:00:28.102578    7688 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 19:00:28.144374    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0507 19:00:28.170640    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 19:00:28.188589    7688 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 19:00:28.196731    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 19:00:28.226332    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 19:00:28.256182    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 19:00:28.284352    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 19:00:28.312094    7688 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 19:00:28.339813    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 19:00:28.367523    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 19:00:28.393699    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0507 19:00:28.420346    7688 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 19:00:28.444928    7688 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 19:00:28.471952    7688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:00:28.655503    7688 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0507 19:00:28.694050    7688 start.go:494] detecting cgroup driver to use...
	I0507 19:00:28.706188    7688 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 19:00:28.737968    7688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 19:00:28.766290    7688 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 19:00:28.799362    7688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 19:00:28.838359    7688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 19:00:28.867366    7688 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0507 19:00:28.922960    7688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 19:00:28.947506    7688 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 19:00:28.993543    7688 ssh_runner.go:195] Run: which cri-dockerd
	I0507 19:00:29.007933    7688 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 19:00:29.024453    7688 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 19:00:29.063290    7688 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 19:00:29.248340    7688 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 19:00:29.427404    7688 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 19:00:29.427404    7688 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0507 19:00:29.469046    7688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:00:29.660321    7688 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 19:00:32.267041    7688 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6064831s)
	I0507 19:00:32.276353    7688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 19:00:32.305912    7688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 19:00:32.336283    7688 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 19:00:32.538037    7688 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 19:00:32.725134    7688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:00:32.914772    7688 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 19:00:32.949268    7688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 19:00:32.981577    7688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:00:33.179060    7688 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 19:00:33.296393    7688 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 19:00:33.306277    7688 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 19:00:33.313831    7688 start.go:562] Will wait 60s for crictl version
	I0507 19:00:33.324493    7688 ssh_runner.go:195] Run: which crictl
	I0507 19:00:33.338515    7688 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 19:00:33.398553    7688 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0507 19:00:33.409588    7688 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 19:00:33.452123    7688 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 19:00:33.486861    7688 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0507 19:00:33.487055    7688 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0507 19:00:33.492796    7688 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0507 19:00:33.492796    7688 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0507 19:00:33.492796    7688 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0507 19:00:33.492796    7688 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a3:a5:4f Flags:up|broadcast|multicast|running}
	I0507 19:00:33.495599    7688 ip.go:210] interface addr: fe80::1edb:f5fd:c218:d8d2/64
	I0507 19:00:33.495599    7688 ip.go:210] interface addr: 172.19.128.1/20
	I0507 19:00:33.503644    7688 ssh_runner.go:195] Run: grep 172.19.128.1	host.minikube.internal$ /etc/hosts
	I0507 19:00:33.510582    7688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 19:00:33.530557    7688 mustload.go:65] Loading cluster: ha-210800
	I0507 19:00:33.532229    7688 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:00:33.532947    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 19:00:35.510845    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:00:35.510845    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:35.510845    7688 host.go:66] Checking if "ha-210800" exists ...
	I0507 19:00:35.511781    7688 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800 for IP: 172.19.143.44
	I0507 19:00:35.511781    7688 certs.go:194] generating shared ca certs ...
	I0507 19:00:35.511781    7688 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:00:35.512675    7688 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0507 19:00:35.513139    7688 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0507 19:00:35.513402    7688 certs.go:256] generating profile certs ...
	I0507 19:00:35.513938    7688 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.key
	I0507 19:00:35.514089    7688 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.7bc7bc9f
	I0507 19:00:35.514184    7688 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.7bc7bc9f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.132.69 172.19.143.44 172.19.137.224 172.19.143.254]
	I0507 19:00:35.650650    7688 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.7bc7bc9f ...
	I0507 19:00:35.650650    7688 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.7bc7bc9f: {Name:mkb3c429209752ce2d72d0e064f069647bcac036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:00:35.652902    7688 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.7bc7bc9f ...
	I0507 19:00:35.652992    7688 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.7bc7bc9f: {Name:mk8ecd6be39ee084948670d74e33e85d0cb8d730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:00:35.654397    7688 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.7bc7bc9f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt
	I0507 19:00:35.666042    7688 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.7bc7bc9f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key
	I0507 19:00:35.666719    7688 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key
	I0507 19:00:35.666719    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0507 19:00:35.667069    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0507 19:00:35.667196    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0507 19:00:35.667196    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0507 19:00:35.667196    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0507 19:00:35.667196    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0507 19:00:35.667840    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0507 19:00:35.668116    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0507 19:00:35.668266    7688 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem (1338 bytes)
	W0507 19:00:35.668638    7688 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992_empty.pem, impossibly tiny 0 bytes
	I0507 19:00:35.668786    7688 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0507 19:00:35.668983    7688 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0507 19:00:35.668983    7688 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0507 19:00:35.668983    7688 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0507 19:00:35.668983    7688 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem (1708 bytes)
	I0507 19:00:35.669725    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem -> /usr/share/ca-certificates/9992.pem
	I0507 19:00:35.669850    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /usr/share/ca-certificates/99922.pem
	I0507 19:00:35.669917    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:00:35.670230    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 19:00:37.635210    7688 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:00:37.635287    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:37.635357    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 19:00:39.938184    7688 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 19:00:39.938184    7688 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:00:39.938184    7688 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 19:00:40.043940    7688 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0507 19:00:40.047831    7688 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0507 19:00:40.080544    7688 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0507 19:00:40.087424    7688 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0507 19:00:40.113535    7688 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0507 19:00:40.120566    7688 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0507 19:00:40.148773    7688 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0507 19:00:40.154633    7688 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0507 19:00:40.181350    7688 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0507 19:00:40.187497    7688 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0507 19:00:40.216758    7688 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0507 19:00:40.223451    7688 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0507 19:00:40.242486    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 19:00:40.293086    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 19:00:40.339256    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 19:00:40.383305    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0507 19:00:40.429567    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0507 19:00:40.476667    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0507 19:00:40.520546    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0507 19:00:40.565107    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0507 19:00:40.608228    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem --> /usr/share/ca-certificates/9992.pem (1338 bytes)
	I0507 19:00:40.650385    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /usr/share/ca-certificates/99922.pem (1708 bytes)
	I0507 19:00:40.693634    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 19:00:40.737100    7688 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0507 19:00:40.767114    7688 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0507 19:00:40.797983    7688 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0507 19:00:40.829467    7688 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0507 19:00:40.860808    7688 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0507 19:00:40.894001    7688 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0507 19:00:40.924692    7688 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0507 19:00:40.966330    7688 ssh_runner.go:195] Run: openssl version
	I0507 19:00:40.983729    7688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9992.pem && ln -fs /usr/share/ca-certificates/9992.pem /etc/ssl/certs/9992.pem"
	I0507 19:00:41.012028    7688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9992.pem
	I0507 19:00:41.018888    7688 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 19:00:41.026880    7688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9992.pem
	I0507 19:00:41.043569    7688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9992.pem /etc/ssl/certs/51391683.0"
	I0507 19:00:41.072855    7688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99922.pem && ln -fs /usr/share/ca-certificates/99922.pem /etc/ssl/certs/99922.pem"
	I0507 19:00:41.100178    7688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99922.pem
	I0507 19:00:41.107139    7688 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 19:00:41.115171    7688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99922.pem
	I0507 19:00:41.130848    7688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99922.pem /etc/ssl/certs/3ec20f2e.0"
	I0507 19:00:41.158509    7688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 19:00:41.185638    7688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:00:41.193161    7688 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:00:41.201555    7688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:00:41.218078    7688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0507 19:00:41.249926    7688 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 19:00:41.266367    7688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0507 19:00:41.284578    7688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0507 19:00:41.301642    7688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0507 19:00:41.319545    7688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0507 19:00:41.337629    7688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0507 19:00:41.355354    7688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0507 19:00:41.365203    7688 kubeadm.go:928] updating node {m02 172.19.143.44 8443 v1.30.0 docker true true} ...
	I0507 19:00:41.365512    7688 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-210800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.143.44
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0507 19:00:41.365512    7688 kube-vip.go:111] generating kube-vip config ...
	I0507 19:00:41.373697    7688 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0507 19:00:41.399863    7688 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0507 19:00:41.399863    7688 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.143.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0507 19:00:41.407688    7688 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0507 19:00:41.429644    7688 binaries.go:44] Found k8s binaries, skipping transfer
	I0507 19:00:41.439263    7688 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0507 19:00:41.458801    7688 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0507 19:00:41.489422    7688 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 19:00:41.520196    7688 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0507 19:00:41.560570    7688 ssh_runner.go:195] Run: grep 172.19.143.254	control-plane.minikube.internal$ /etc/hosts
	I0507 19:00:41.566779    7688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 19:00:41.597764    7688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:00:41.782797    7688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 19:00:41.814462    7688 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.143.44 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 19:00:41.819335    7688 out.go:177] * Verifying Kubernetes components...
	I0507 19:00:41.814462    7688 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0507 19:00:41.815093    7688 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:00:41.825774    7688 out.go:177] * Enabled addons: 
	I0507 19:00:41.827920    7688 addons.go:505] duration metric: took 13.4576ms for enable addons: enabled=[]
	I0507 19:00:41.835189    7688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:00:42.037273    7688 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 19:00:42.071406    7688 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 19:00:42.072189    7688 kapi.go:59] client config for ha-210800: &rest.Config{Host:"https://172.19.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-210800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-210800\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0507 19:00:42.072299    7688 kubeadm.go:477] Overriding stale ClientConfig host https://172.19.143.254:8443 with https://172.19.132.69:8443
	I0507 19:00:42.073479    7688 cert_rotation.go:137] Starting client certificate rotation controller
	I0507 19:00:42.073479    7688 node_ready.go:35] waiting up to 6m0s for node "ha-210800-m02" to be "Ready" ...
	I0507 19:00:42.074213    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:42.074302    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:42.074302    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:42.074302    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:42.091599    7688 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0507 19:00:42.588746    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:42.588864    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:42.588864    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:42.588864    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:42.592713    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:00:42.593669    7688 node_ready.go:49] node "ha-210800-m02" has status "Ready":"True"
	I0507 19:00:42.593669    7688 node_ready.go:38] duration metric: took 519.6084ms for node "ha-210800-m02" to be "Ready" ...
	I0507 19:00:42.593669    7688 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 19:00:42.593862    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 19:00:42.593897    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:42.593897    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:42.593920    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:42.602094    7688 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 19:00:42.613674    7688 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace to be "Ready" ...
	I0507 19:00:42.613674    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cr9nn
	I0507 19:00:42.613674    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:42.613674    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:42.613674    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:42.617717    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:42.618794    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 19:00:42.618865    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:42.618865    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:42.618865    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:42.622096    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:00:42.623273    7688 pod_ready.go:92] pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace has status "Ready":"True"
	I0507 19:00:42.623273    7688 pod_ready.go:81] duration metric: took 9.5986ms for pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace to be "Ready" ...
	I0507 19:00:42.623273    7688 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace to be "Ready" ...
	I0507 19:00:42.623273    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dxsqf
	I0507 19:00:42.623273    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:42.623273    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:42.623273    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:42.627607    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:42.628378    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 19:00:42.628409    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:42.628409    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:42.628449    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:42.632093    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:00:42.633114    7688 pod_ready.go:92] pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace has status "Ready":"True"
	I0507 19:00:42.633114    7688 pod_ready.go:81] duration metric: took 9.8403ms for pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace to be "Ready" ...
	I0507 19:00:42.633114    7688 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 19:00:42.633200    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800
	I0507 19:00:42.633273    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:42.633273    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:42.633273    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:42.639801    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:00:42.639801    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 19:00:42.639801    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:42.639801    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:42.639801    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:42.644122    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:42.644820    7688 pod_ready.go:92] pod "etcd-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 19:00:42.644820    7688 pod_ready.go:81] duration metric: took 11.6192ms for pod "etcd-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 19:00:42.644820    7688 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 19:00:42.645389    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:42.645389    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:42.645389    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:42.645389    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:42.648949    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:00:42.648949    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:42.648949    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:42.648949    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:42.648949    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:42.654031    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:00:43.152486    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:43.152486    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:43.152486    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:43.152486    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:43.156820    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:43.157826    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:43.157826    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:43.157826    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:43.157826    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:43.162957    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:00:43.645604    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:43.645604    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:43.645604    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:43.645682    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:43.654577    7688 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 19:00:43.656251    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:43.656283    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:43.656283    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:43.656305    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:43.704534    7688 round_trippers.go:574] Response Status: 200 OK in 48 milliseconds
	I0507 19:00:44.151753    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:44.151753    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:44.151753    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:44.151753    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:44.156329    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:44.157443    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:44.157443    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:44.157443    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:44.157443    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:44.162664    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:00:44.656034    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:44.656034    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:44.656034    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:44.656110    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:44.660224    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:44.662129    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:44.662189    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:44.662189    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:44.662189    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:44.666268    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:44.667628    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:00:45.147281    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:45.147499    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:45.147499    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:45.147499    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:45.152129    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:45.153080    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:45.153080    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:45.153080    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:45.153080    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:45.157094    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:45.651735    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:45.651792    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:45.651792    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:45.651792    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:45.674125    7688 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0507 19:00:45.675034    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:45.675034    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:45.675034    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:45.675109    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:45.682386    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:00:46.156297    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:46.156297    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:46.156297    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:46.156297    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:46.161386    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:00:46.162371    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:46.162435    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:46.162435    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:46.162435    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:46.165742    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:00:46.648635    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:46.648635    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:46.648635    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:46.648635    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:46.656191    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:00:46.657987    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:46.657987    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:46.657987    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:46.657987    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:46.663429    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:00:47.158042    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:47.158042    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:47.158042    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:47.158042    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:47.162611    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:47.163374    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:47.163374    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:47.163374    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:47.163374    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:47.166941    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:00:47.168541    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:00:47.660291    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:47.660291    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:47.660397    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:47.660397    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:47.664577    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:47.666314    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:47.666314    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:47.666314    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:47.666314    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:47.670630    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:48.160120    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:48.160368    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:48.160368    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:48.160368    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:48.173207    7688 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0507 19:00:48.175439    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:48.175439    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:48.175533    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:48.175533    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:48.184810    7688 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0507 19:00:48.655465    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:48.655465    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:48.655465    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:48.655465    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:48.660489    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:00:48.661953    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:48.661953    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:48.661953    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:48.661953    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:48.668762    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:00:49.157673    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:49.157751    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:49.157751    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:49.157751    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:49.162074    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:49.167505    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:49.167505    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:49.167505    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:49.167596    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:49.172179    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:49.173160    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:00:49.657849    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:49.658083    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:49.658083    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:49.658083    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:49.663438    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:00:49.664726    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:49.664726    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:49.664726    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:49.664726    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:49.669651    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:50.153960    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:50.154160    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:50.154160    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:50.154160    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:50.159949    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:00:50.161897    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:50.161897    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:50.161897    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:50.161897    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:50.167349    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:00:50.655470    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:50.655573    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:50.655573    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:50.655573    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:50.660354    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:50.661391    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:50.661933    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:50.662036    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:50.662036    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:50.667759    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:00:51.152865    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:51.152936    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:51.153006    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:51.153006    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:51.159427    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:00:51.160382    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:51.161271    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:51.161309    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:51.161309    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:51.165122    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:00:51.652511    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:51.652579    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:51.652579    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:51.652648    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:51.661350    7688 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 19:00:51.662514    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:51.662556    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:51.662556    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:51.662586    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:51.666739    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:51.667878    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:00:52.153944    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:52.153944    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:52.153944    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:52.153944    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:52.162689    7688 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 19:00:52.164386    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:52.164386    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:52.164386    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:52.164523    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:52.167686    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:00:52.657393    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:52.657485    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:52.657485    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:52.657485    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:52.662638    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:00:52.664303    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:52.664303    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:52.664402    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:52.664402    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:52.669852    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:00:53.156221    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:53.156221    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:53.156221    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:53.156221    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:53.160502    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:53.162077    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:53.162077    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:53.162077    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:53.162077    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:53.166347    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:53.646814    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:53.646876    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:53.646876    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:53.646876    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:53.652287    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:53.653276    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:53.653276    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:53.653276    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:53.653341    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:53.656359    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:00:54.149496    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:54.149496    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:54.149496    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:54.149496    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:54.154697    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:54.155488    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:54.155488    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:54.155488    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:54.155488    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:54.161110    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:00:54.162281    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:00:54.648278    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:54.648398    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:54.648398    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:54.648398    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:54.653832    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:00:54.655450    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:54.655450    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:54.655450    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:54.655450    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:54.663366    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:00:55.150842    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:55.150842    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:55.150842    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:55.150842    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:55.157398    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:00:55.159107    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:55.159107    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:55.159237    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:55.159237    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:55.164012    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:55.652905    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:55.652990    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:55.652990    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:55.652990    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:55.659306    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:00:55.659910    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:55.659910    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:55.659910    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:55.659910    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:55.663480    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:00:56.153600    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:56.153600    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:56.153600    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:56.153600    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:56.161470    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:00:56.163815    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:56.163873    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:56.163923    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:56.163983    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:56.168252    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:56.170549    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:00:56.655232    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:56.655516    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:56.655516    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:56.655516    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:56.662935    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:00:56.665105    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:56.665216    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:56.665216    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:56.665216    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:56.670462    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:00:57.159468    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:57.159781    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:57.159781    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:57.159781    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:57.165092    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:00:57.166268    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:57.166268    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:57.166268    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:57.166268    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:57.171114    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:57.656653    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:57.656653    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:57.656653    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:57.656653    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:57.660962    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:57.662545    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:57.662616    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:57.662616    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:57.662616    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:57.666942    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:58.159723    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:58.159723    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:58.159723    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:58.159723    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:58.168917    7688 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0507 19:00:58.170511    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:58.170623    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:58.170623    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:58.170697    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:58.174672    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:00:58.175888    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:00:58.660937    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:58.660937    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:58.660937    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:58.660937    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:58.664703    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:00:58.667098    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:58.667172    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:58.667172    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:58.667172    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:58.671627    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:59.148393    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:59.148393    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:59.148472    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:59.148472    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:59.157571    7688 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0507 19:00:59.158745    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:59.158745    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:59.158745    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:59.158745    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:59.162444    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:00:59.647174    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:00:59.647174    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:59.647440    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:59.647440    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:59.652292    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:00:59.653392    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:00:59.653486    7688 round_trippers.go:469] Request Headers:
	I0507 19:00:59.653486    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:00:59.653486    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:00:59.658258    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:00.161349    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:00.161427    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:00.161427    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:00.161427    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:00.169401    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:01:00.170910    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:00.170968    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:00.171025    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:00.171025    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:00.175692    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:00.176303    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:00.655677    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:00.655677    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:00.655677    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:00.655677    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:00.660434    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:00.662140    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:00.662140    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:00.662140    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:00.662274    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:00.666620    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:01.155627    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:01.155627    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:01.155710    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:01.155710    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:01.162027    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:01:01.162925    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:01.162925    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:01.162925    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:01.162925    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:01.167044    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:01.656881    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:01.656881    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:01.656881    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:01.656881    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:01.664821    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:01:01.665828    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:01.665828    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:01.665828    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:01.665828    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:01.669925    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:02.159036    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:02.159177    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:02.159177    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:02.159177    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:02.163500    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:02.164794    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:02.164794    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:02.164794    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:02.164858    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:02.168281    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:02.648133    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:02.648215    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:02.648215    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:02.648215    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:02.653766    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:02.654639    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:02.654639    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:02.654639    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:02.654639    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:02.658932    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:02.659680    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:03.148669    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:03.148669    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:03.148797    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:03.148797    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:03.153867    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:03.154835    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:03.154835    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:03.154835    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:03.154835    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:03.161882    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:01:03.646696    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:03.646696    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:03.646696    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:03.646696    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:03.654549    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:01:03.655603    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:03.655603    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:03.655603    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:03.655603    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:03.659158    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:04.147491    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:04.147641    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:04.147718    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:04.147718    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:04.153027    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:04.154733    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:04.154790    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:04.154790    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:04.154790    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:04.158557    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:04.648308    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:04.648308    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:04.648308    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:04.648308    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:04.652447    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:04.654460    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:04.654460    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:04.654460    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:04.654460    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:04.657673    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:05.152400    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:05.152522    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:05.152522    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:05.152522    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:05.157135    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:05.157736    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:05.157736    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:05.157736    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:05.157736    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:05.161307    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:05.162004    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:05.657082    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:05.657082    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:05.657082    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:05.657082    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:05.662648    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:05.663519    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:05.663519    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:05.663519    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:05.663519    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:05.667174    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:06.155758    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:06.155826    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:06.155826    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:06.155826    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:06.160409    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:06.162298    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:06.162387    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:06.162425    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:06.162450    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:06.165729    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:06.658525    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:06.658525    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:06.658525    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:06.658525    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:06.662138    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:06.663649    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:06.663736    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:06.663736    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:06.663736    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:06.668446    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:07.157115    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:07.157115    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:07.157115    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:07.157115    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:07.161716    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:07.162923    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:07.162923    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:07.162923    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:07.162923    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:07.167104    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:07.168472    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:07.662030    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:07.662030    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:07.662030    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:07.662030    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:07.666685    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:07.668854    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:07.668854    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:07.668854    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:07.668854    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:07.673429    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:08.147487    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:08.147487    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:08.147735    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:08.147735    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:08.151504    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:08.153179    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:08.153179    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:08.153179    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:08.153179    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:08.157079    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:08.660864    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:08.660864    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:08.660864    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:08.660864    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:08.665481    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:08.666997    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:08.666997    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:08.667054    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:08.667054    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:08.675059    7688 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 19:01:09.161310    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:09.161310    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:09.161310    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:09.161310    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:09.166902    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:09.168013    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:09.168083    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:09.168083    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:09.168083    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:09.172754    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:09.173369    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:09.659489    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:09.659489    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:09.659489    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:09.659489    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:09.664146    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:09.666221    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:09.666221    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:09.666315    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:09.666315    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:09.670552    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:10.159020    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:10.159020    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:10.159020    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:10.159020    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:10.163490    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:10.165245    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:10.165245    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:10.165245    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:10.165351    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:10.168667    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:10.661756    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:10.661756    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:10.661756    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:10.661756    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:10.666706    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:10.669057    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:10.669145    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:10.669145    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:10.669145    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:10.674796    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:11.160384    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:11.160384    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:11.160384    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:11.160499    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:11.165398    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:11.167401    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:11.167401    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:11.167401    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:11.167401    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:11.171683    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:11.661216    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:11.661480    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:11.661480    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:11.661480    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:11.665237    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:11.667509    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:11.667509    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:11.667571    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:11.667571    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:11.674541    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:01:11.674541    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:12.147267    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:12.147267    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:12.147267    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:12.147267    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:12.152968    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:12.155044    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:12.155108    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:12.155108    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:12.155108    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:12.166316    7688 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0507 19:01:12.648313    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:12.648313    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:12.648313    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:12.648313    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:12.653880    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:12.655050    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:12.655136    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:12.655136    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:12.655136    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:12.661667    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:01:13.149774    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:13.149845    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:13.149845    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:13.149845    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:13.155140    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:13.156282    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:13.156282    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:13.156282    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:13.156282    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:13.160463    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:13.648566    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:13.648566    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:13.648633    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:13.648633    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:13.653490    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:13.654661    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:13.654731    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:13.654731    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:13.654731    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:13.660996    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:01:14.150667    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:14.150667    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:14.150667    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:14.150667    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:14.155377    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:14.157194    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:14.157247    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:14.157247    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:14.157247    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:14.161567    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:14.162235    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:14.652779    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:14.652779    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:14.652863    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:14.652863    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:14.658655    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:14.659984    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:14.660094    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:14.660094    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:14.660094    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:14.664907    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:15.161275    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:15.161275    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:15.161275    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:15.161275    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:15.167148    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:15.167808    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:15.167808    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:15.167808    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:15.167808    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:15.171760    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:15.650070    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:15.650070    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:15.650070    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:15.650070    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:15.655659    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:15.657043    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:15.657043    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:15.657043    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:15.657043    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:15.659636    7688 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:01:16.148048    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:16.148048    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:16.148121    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:16.148121    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:16.155384    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:01:16.156398    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:16.156398    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:16.156398    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:16.156398    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:16.161028    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:16.655589    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:16.655589    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:16.655589    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:16.655736    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:16.660388    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:16.662523    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:16.662523    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:16.662523    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:16.662523    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:16.670258    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:01:16.671250    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:17.154464    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:17.154464    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:17.154464    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:17.154817    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:17.159065    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:17.160185    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:17.160185    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:17.160185    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:17.160185    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:17.164059    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:17.653562    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:17.653562    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:17.653722    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:17.653722    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:17.658673    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:17.660083    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:17.660083    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:17.660146    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:17.660146    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:17.666725    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:01:18.153083    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:18.153179    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:18.153241    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:18.153241    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:18.158964    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:18.159880    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:18.159947    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:18.159947    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:18.159947    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:18.163684    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:18.655345    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:18.655597    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:18.655597    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:18.655597    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:18.660194    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:18.662363    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:18.662363    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:18.662456    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:18.662456    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:18.666827    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:19.155182    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:19.155182    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:19.155182    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:19.155182    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:19.159930    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:19.161711    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:19.161764    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:19.161834    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:19.161834    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:19.165064    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:19.166588    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:19.652685    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:19.652800    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:19.652800    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:19.652800    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:19.657266    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:19.659262    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:19.659262    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:19.659262    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:19.659262    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:19.663826    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:20.149935    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:20.149935    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:20.150395    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:20.150395    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:20.155486    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:20.156722    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:20.156722    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:20.156722    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:20.156722    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:20.160330    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:20.663087    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:20.663087    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:20.663087    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:20.663087    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:20.667470    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:20.669907    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:20.669907    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:20.669907    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:20.669907    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:20.675218    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:21.161963    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:21.161963    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:21.161963    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:21.161963    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:21.166956    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:21.167572    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:21.167572    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:21.167572    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:21.167572    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:21.171194    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:21.173047    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:21.661874    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:21.661874    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:21.661874    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:21.661874    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:21.666262    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:21.667591    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:21.667655    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:21.667655    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:21.667655    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:21.670966    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:22.147909    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:22.148182    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:22.148182    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:22.148182    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:22.153415    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:22.154900    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:22.154952    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:22.154952    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:22.154952    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:22.162174    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:01:22.654704    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:22.654704    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:22.654704    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:22.654704    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:22.659834    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:22.660537    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:22.660537    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:22.660537    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:22.660537    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:22.665175    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:23.163039    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:23.163152    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:23.163152    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:23.163152    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:23.167489    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:23.168702    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:23.168702    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:23.168702    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:23.168702    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:23.172898    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:23.173887    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:23.649428    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:23.649428    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:23.649428    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:23.649428    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:23.654498    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:23.655678    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:23.655776    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:23.655776    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:23.655776    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:23.659848    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:24.154343    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:24.154426    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:24.154426    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:24.154426    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:24.162273    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:01:24.163180    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:24.163339    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:24.163339    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:24.163373    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:24.167398    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:24.656267    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:24.656368    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:24.656368    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:24.656368    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:24.664297    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:01:24.665215    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:24.665215    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:24.665215    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:24.665215    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:24.671846    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:01:25.156401    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:25.156604    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:25.156604    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:25.156604    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:25.166822    7688 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0507 19:01:25.167257    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:25.167257    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:25.167257    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:25.167257    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:25.170840    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:25.655841    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:25.655841    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:25.655841    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:25.655942    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:25.660869    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:25.661713    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:25.661713    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:25.661713    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:25.661713    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:25.666056    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:25.666889    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:26.154820    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:26.154882    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:26.154882    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:26.154882    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:26.159244    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:26.160914    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:26.160914    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:26.160914    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:26.160914    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:26.169838    7688 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 19:01:26.654674    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:26.654801    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:26.654801    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:26.654801    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:26.659722    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:26.660467    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:26.660467    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:26.660467    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:26.660467    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:26.663704    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:27.154573    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:27.154742    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:27.154742    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:27.154742    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:27.160230    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:27.161254    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:27.161337    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:27.161337    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:27.161337    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:27.165551    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:27.652556    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:27.652611    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:27.652679    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:27.652679    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:27.662982    7688 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0507 19:01:27.664736    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:27.664736    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:27.664736    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:27.664736    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:27.669943    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:27.670352    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:28.151149    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:28.151149    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:28.151149    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:28.151149    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:28.155879    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:28.157567    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:28.157567    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:28.157653    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:28.157653    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:28.162683    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:28.651858    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:28.651858    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:28.651858    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:28.651858    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:28.657104    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:28.657794    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:28.657794    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:28.657794    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:28.657941    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:28.666434    7688 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 19:01:29.150960    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:29.151031    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:29.151031    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:29.151031    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:29.155811    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:29.156671    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:29.156671    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:29.156671    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:29.156671    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:29.162599    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:29.652486    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:29.652486    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:29.652486    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:29.652579    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:29.657660    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:29.658318    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:29.658318    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:29.658318    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:29.658318    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:29.662357    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:30.151350    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:30.151635    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:30.151635    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:30.151635    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:30.160145    7688 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 19:01:30.161358    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:30.161410    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:30.161410    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:30.161410    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:30.165636    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:30.167258    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:30.655217    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:30.655217    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:30.655329    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:30.655329    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:30.662366    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:01:30.663525    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:30.663586    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:30.663586    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:30.663586    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:30.667704    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:31.157293    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:31.157293    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:31.157293    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:31.157293    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:31.161888    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:31.163643    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:31.163643    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:31.163643    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:31.163643    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:31.169378    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:31.659105    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:31.659105    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:31.659105    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:31.659105    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:31.666334    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:01:31.667371    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:31.667371    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:31.667371    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:31.667371    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:31.672310    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:32.160753    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:32.161020    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:32.161057    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:32.161057    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:32.168635    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:01:32.170228    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:32.170283    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:32.170283    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:32.170283    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:32.174645    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:32.176073    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:32.663526    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:32.663631    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:32.663631    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:32.663631    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:32.667960    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:32.669387    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:32.669387    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:32.669387    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:32.669387    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:32.686544    7688 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
	I0507 19:01:33.149380    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:33.149571    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:33.149643    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:33.149643    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:33.155343    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:33.156733    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:33.156811    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:33.156811    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:33.156811    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:33.164746    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:01:33.650773    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:33.650838    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:33.650838    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:33.650838    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:33.655719    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:33.657258    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:33.657258    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:33.657258    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:33.657258    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:33.661826    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:34.152388    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:34.152388    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:34.152388    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:34.152454    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:34.158941    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:01:34.160389    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:34.160389    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:34.160389    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:34.160389    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:34.164101    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:34.651434    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:34.651434    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:34.651434    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:34.651434    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:34.655788    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:34.656635    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:34.656635    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:34.656635    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:34.656635    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:34.659892    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:34.660960    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:35.154159    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:35.154159    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:35.154159    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:35.154159    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:35.159696    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:35.161058    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:35.161155    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:35.161241    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:35.161241    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:35.165835    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:35.654843    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:35.654843    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:35.654843    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:35.654843    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:35.659621    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:35.661271    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:35.661271    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:35.661271    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:35.661271    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:35.665574    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:36.157038    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:36.157038    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:36.157038    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:36.157038    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:36.160652    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:36.162573    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:36.162573    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:36.162573    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:36.162573    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:36.166164    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:36.658534    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:36.658534    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:36.658534    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:36.658534    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:36.662280    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:36.664689    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:36.664774    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:36.664774    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:36.664849    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:36.672612    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:01:36.674180    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:37.161570    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:37.161835    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:37.161835    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:37.161835    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:37.166422    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:37.168152    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:37.168310    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:37.168310    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:37.168310    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:37.173041    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:37.660635    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:37.660635    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:37.660635    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:37.660635    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:37.665221    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:37.667649    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:37.667649    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:37.667717    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:37.667717    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:37.673481    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:38.161829    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:38.161829    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:38.161829    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:38.161829    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:38.169050    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:01:38.170130    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:38.170189    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:38.170189    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:38.170189    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:38.174871    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:38.657673    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:38.657750    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:38.657750    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:38.657750    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:38.663018    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:38.664381    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:38.664436    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:38.664436    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:38.664500    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:38.668775    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:39.156469    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:39.156469    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:39.156469    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:39.156469    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:39.161143    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:39.161876    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:39.161876    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:39.161876    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:39.161876    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:39.165438    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:01:39.166790    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 19:01:39.654012    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:39.654012    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:39.654012    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:39.654012    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:39.659266    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:39.660477    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:39.660542    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:39.660542    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:39.660542    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:39.665354    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:01:40.156165    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 19:01:40.156244    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:40.156244    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:40.156244    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:40.161422    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:01:40.162878    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 19:01:40.162984    7688 round_trippers.go:469] Request Headers:
	I0507 19:01:40.162984    7688 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:01:40.162984    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:01:40.166371    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds

** /stderr **
ha_test.go:422: W0507 18:58:43.242452    7688 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0507 18:58:43.300495    7688 out.go:291] Setting OutFile to fd 636 ...
I0507 18:58:43.315061    7688 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 18:58:43.315061    7688 out.go:304] Setting ErrFile to fd 724...
I0507 18:58:43.315263    7688 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 18:58:43.328287    7688 mustload.go:65] Loading cluster: ha-210800
I0507 18:58:43.329358    7688 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 18:58:43.329764    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:58:45.236007    7688 main.go:141] libmachine: [stdout =====>] : Off

I0507 18:58:45.236103    7688 main.go:141] libmachine: [stderr =====>] : 
W0507 18:58:45.236190    7688 host.go:58] "ha-210800-m02" host status: Stopped
I0507 18:58:45.239198    7688 out.go:177] * Starting "ha-210800-m02" control-plane node in "ha-210800" cluster
I0507 18:58:45.241405    7688 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
I0507 18:58:45.241405    7688 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
I0507 18:58:45.241405    7688 cache.go:56] Caching tarball of preloaded images
I0507 18:58:45.242076    7688 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0507 18:58:45.242076    7688 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
I0507 18:58:45.242607    7688 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
I0507 18:58:45.243369    7688 start.go:360] acquireMachinesLock for ha-210800-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
I0507 18:58:45.244308    7688 start.go:364] duration metric: took 938.6µs to acquireMachinesLock for "ha-210800-m02"
I0507 18:58:45.244308    7688 start.go:96] Skipping create...Using existing machine configuration
I0507 18:58:45.244308    7688 fix.go:54] fixHost starting: m02
I0507 18:58:45.244308    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:58:47.157733    7688 main.go:141] libmachine: [stdout =====>] : Off

I0507 18:58:47.157733    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:58:47.157733    7688 fix.go:112] recreateIfNeeded on ha-210800-m02: state=Stopped err=<nil>
W0507 18:58:47.157820    7688 fix.go:138] unexpected machine state, will restart: <nil>
I0507 18:58:47.163622    7688 out.go:177] * Restarting existing hyperv VM for "ha-210800-m02" ...
I0507 18:58:47.166279    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-210800-m02
I0507 18:58:49.992929    7688 main.go:141] libmachine: [stdout =====>] : 
I0507 18:58:49.993422    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:58:49.993422    7688 main.go:141] libmachine: Waiting for host to start...
I0507 18:58:49.993485    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:58:52.036231    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:58:52.036295    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:58:52.036366    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 18:58:54.325674    7688 main.go:141] libmachine: [stdout =====>] : 
I0507 18:58:54.325674    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:58:55.334070    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:58:57.302999    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:58:57.302999    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:58:57.302999    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 18:58:59.556012    7688 main.go:141] libmachine: [stdout =====>] : 
I0507 18:58:59.556291    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:00.561618    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:59:02.530615    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:59:02.531164    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:02.531164    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 18:59:04.754319    7688 main.go:141] libmachine: [stdout =====>] : 
I0507 18:59:04.754319    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:05.765393    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:59:07.733688    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:59:07.733688    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:07.733688    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 18:59:09.973623    7688 main.go:141] libmachine: [stdout =====>] : 
I0507 18:59:09.974638    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:10.985996    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:59:12.970306    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:59:12.970306    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:12.970306    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 18:59:15.299814    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 18:59:15.299814    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:15.301720    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:59:17.222482    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:59:17.222482    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:17.223671    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 18:59:19.491643    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 18:59:19.491643    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:19.492398    7688 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
I0507 18:59:19.494127    7688 machine.go:94] provisionDockerMachine start ...
I0507 18:59:19.494269    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:59:21.414192    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:59:21.414192    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:21.414192    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 18:59:23.695912    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 18:59:23.695912    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:23.702210    7688 main.go:141] libmachine: Using SSH client type: native
I0507 18:59:23.702850    7688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.44 22 <nil> <nil>}
I0507 18:59:23.702850    7688 main.go:141] libmachine: About to run SSH command:
hostname
I0507 18:59:23.829150    7688 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube

I0507 18:59:23.829715    7688 buildroot.go:166] provisioning hostname "ha-210800-m02"
I0507 18:59:23.829715    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:59:25.791932    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:59:25.791932    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:25.791932    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 18:59:28.050592    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 18:59:28.050592    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:28.055456    7688 main.go:141] libmachine: Using SSH client type: native
I0507 18:59:28.055861    7688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.44 22 <nil> <nil>}
I0507 18:59:28.055912    7688 main.go:141] libmachine: About to run SSH command:
sudo hostname ha-210800-m02 && echo "ha-210800-m02" | sudo tee /etc/hostname
I0507 18:59:28.218339    7688 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-210800-m02

I0507 18:59:28.218339    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:59:30.103756    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:59:30.104552    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:30.104627    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 18:59:32.371717    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 18:59:32.371855    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:32.374711    7688 main.go:141] libmachine: Using SSH client type: native
I0507 18:59:32.375331    7688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.44 22 <nil> <nil>}
I0507 18:59:32.375331    7688 main.go:141] libmachine: About to run SSH command:

		if ! grep -xq '.*\sha-210800-m02' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-210800-m02/g' /etc/hosts;
			else 
				echo '127.0.1.1 ha-210800-m02' | sudo tee -a /etc/hosts; 
			fi
		fi
I0507 18:59:32.527454    7688 main.go:141] libmachine: SSH cmd err, output: <nil>: 
I0507 18:59:32.527565    7688 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
I0507 18:59:32.527663    7688 buildroot.go:174] setting up certificates
I0507 18:59:32.527663    7688 provision.go:84] configureAuth start
I0507 18:59:32.527754    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:59:34.427095    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:59:34.427095    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:34.427864    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 18:59:36.695660    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 18:59:36.695660    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:36.696497    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:59:38.608011    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:59:38.608535    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:38.608637    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 18:59:40.919749    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 18:59:40.919749    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:40.919808    7688 provision.go:143] copyHostCerts
I0507 18:59:40.919967    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
I0507 18:59:40.920233    7688 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
I0507 18:59:40.920233    7688 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
I0507 18:59:40.920335    7688 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
I0507 18:59:40.920924    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
I0507 18:59:40.921562    7688 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
I0507 18:59:40.921562    7688 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
I0507 18:59:40.921901    7688 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
I0507 18:59:40.922835    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
I0507 18:59:40.922835    7688 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
I0507 18:59:40.922835    7688 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
I0507 18:59:40.923359    7688 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
I0507 18:59:40.924057    7688 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-210800-m02 san=[127.0.0.1 172.19.143.44 ha-210800-m02 localhost minikube]
I0507 18:59:41.109396    7688 provision.go:177] copyRemoteCerts
I0507 18:59:41.117780    7688 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0507 18:59:41.117780    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:59:43.052040    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:59:43.052040    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:43.052121    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 18:59:45.390195    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 18:59:45.390195    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:45.390195    7688 sshutil.go:53] new ssh client: &{IP:172.19.143.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
I0507 18:59:45.495889    7688 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3778172s)
I0507 18:59:45.495889    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
I0507 18:59:45.496889    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
I0507 18:59:45.540421    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
I0507 18:59:45.541048    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
I0507 18:59:45.593154    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
I0507 18:59:45.593154    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
I0507 18:59:45.640799    7688 provision.go:87] duration metric: took 13.1122604s to configureAuth
I0507 18:59:45.640799    7688 buildroot.go:189] setting minikube options for container-runtime
I0507 18:59:45.641404    7688 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 18:59:45.641404    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:59:47.535673    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:59:47.535673    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:47.536728    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 18:59:49.808622    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 18:59:49.809244    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:49.815010    7688 main.go:141] libmachine: Using SSH client type: native
I0507 18:59:49.815010    7688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.44 22 <nil> <nil>}
I0507 18:59:49.815010    7688 main.go:141] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0507 18:59:49.953893    7688 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs

I0507 18:59:49.953893    7688 buildroot.go:70] root file system type: tmpfs
I0507 18:59:49.953893    7688 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
I0507 18:59:49.953893    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:59:51.896710    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:59:51.897619    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:51.897713    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 18:59:54.170505    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 18:59:54.171346    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:54.175218    7688 main.go:141] libmachine: Using SSH client type: native
I0507 18:59:54.175525    7688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.44 22 <nil> <nil>}
I0507 18:59:54.175525    7688 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0507 18:59:54.338904    7688 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target  minikube-automount.service docker.socket
Requires= minikube-automount.service docker.socket 
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure



# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

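The unit file echoed above demonstrates the standard systemd override pattern that its own comments describe: an empty `ExecStart=` first clears the command inherited from the base configuration, and only then is the replacement command set. The same pattern in a minimal drop-in override (path and dockerd flags here are illustrative, not taken from this run) would look like:

```ini
# /etc/systemd/system/docker.service.d/override.conf  (illustrative path)
[Service]
# Clear the ExecStart inherited from the base unit. Without this line,
# systemd rejects the unit: "Service has more than one ExecStart= setting,
# which is only allowed for Type=oneshot services."
ExecStart=
ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
```

A `sudo systemctl daemon-reload && sudo systemctl restart docker` is then needed for the override to take effect, mirroring the reload/restart sequence in the log.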
I0507 18:59:54.338904    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 18:59:56.296803    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:59:56.296803    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:56.296889    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 18:59:58.558613    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 18:59:58.558687    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 18:59:58.562112    7688 main.go:141] libmachine: Using SSH client type: native
I0507 18:59:58.562112    7688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.44 22 <nil> <nil>}
I0507 18:59:58.562112    7688 main.go:141] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0507 19:00:00.968325    7688 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

I0507 19:00:00.968390    7688 machine.go:97] duration metric: took 41.4714388s to provisionDockerMachine
I0507 19:00:00.968460    7688 start.go:293] postStartSetup for "ha-210800-m02" (driver="hyperv")
I0507 19:00:00.968460    7688 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0507 19:00:00.976963    7688 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0507 19:00:00.976963    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 19:00:02.959576    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 19:00:02.959658    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:02.959749    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 19:00:05.366834    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 19:00:05.366834    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:05.366977    7688 sshutil.go:53] new ssh client: &{IP:172.19.143.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
I0507 19:00:05.481562    7688 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.5042426s)
I0507 19:00:05.490430    7688 ssh_runner.go:195] Run: cat /etc/os-release
I0507 19:00:05.497336    7688 info.go:137] Remote host: Buildroot 2023.02.9
I0507 19:00:05.497336    7688 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
I0507 19:00:05.497412    7688 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
I0507 19:00:05.498343    7688 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> 99922.pem in /etc/ssl/certs
I0507 19:00:05.498343    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /etc/ssl/certs/99922.pem
I0507 19:00:05.507188    7688 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
I0507 19:00:05.526307    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /etc/ssl/certs/99922.pem (1708 bytes)
I0507 19:00:05.570074    7688 start.go:296] duration metric: took 4.6012438s for postStartSetup
I0507 19:00:05.580400    7688 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
I0507 19:00:05.580400    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 19:00:07.569273    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 19:00:07.569273    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:07.569776    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 19:00:09.932111    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 19:00:09.932111    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:09.932665    7688 sshutil.go:53] new ssh client: &{IP:172.19.143.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
I0507 19:00:10.038122    7688 ssh_runner.go:235] Completed: sudo ls --almost-all -1 /var/lib/minikube/backup: (4.4574263s)
I0507 19:00:10.038122    7688 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
I0507 19:00:10.047099    7688 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
I0507 19:00:10.120364    7688 fix.go:56] duration metric: took 1m24.8703865s for fixHost
I0507 19:00:10.120364    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 19:00:12.094101    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 19:00:12.094101    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:12.094309    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 19:00:14.456347    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 19:00:14.456347    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:14.460611    7688 main.go:141] libmachine: Using SSH client type: native
I0507 19:00:14.460611    7688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.44 22 <nil> <nil>}
I0507 19:00:14.460611    7688 main.go:141] libmachine: About to run SSH command:
date +%s.%N
I0507 19:00:14.588550    7688 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715108414.825055047

I0507 19:00:14.588550    7688 fix.go:216] guest clock: 1715108414.825055047
I0507 19:00:14.588550    7688 fix.go:229] Guest: 2024-05-07 19:00:14.825055047 +0000 UTC Remote: 2024-05-07 19:00:10.1203644 +0000 UTC m=+86.941139301 (delta=4.704690647s)
I0507 19:00:14.588550    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 19:00:16.560162    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 19:00:16.560540    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:16.560707    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 19:00:18.968607    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 19:00:18.968607    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:18.973035    7688 main.go:141] libmachine: Using SSH client type: native
I0507 19:00:18.973468    7688 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.44 22 <nil> <nil>}
I0507 19:00:18.973468    7688 main.go:141] libmachine: About to run SSH command:
sudo date -s @1715108414
I0507 19:00:19.114443    7688 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May  7 19:00:14 UTC 2024

I0507 19:00:19.114443    7688 fix.go:236] clock set: Tue May  7 19:00:14 UTC 2024
(err=<nil>)
I0507 19:00:19.114443    7688 start.go:83] releasing machines lock for "ha-210800-m02", held for 1m33.863868s
I0507 19:00:19.115738    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 19:00:21.049499    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 19:00:21.049499    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:21.049499    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 19:00:23.414606    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 19:00:23.414606    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:23.418767    7688 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
I0507 19:00:23.418849    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 19:00:23.426440    7688 ssh_runner.go:195] Run: systemctl --version
I0507 19:00:23.426440    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
I0507 19:00:25.416717    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 19:00:25.416807    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:25.416955    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 19:00:25.431028    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 19:00:25.431028    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:25.431028    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
I0507 19:00:27.884011    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 19:00:27.884011    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:27.885068    7688 sshutil.go:53] new ssh client: &{IP:172.19.143.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
I0507 19:00:27.903243    7688 main.go:141] libmachine: [stdout =====>] : 172.19.143.44

I0507 19:00:27.903243    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:27.903243    7688 sshutil.go:53] new ssh client: &{IP:172.19.143.44 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
I0507 19:00:28.046256    7688 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6271829s)
I0507 19:00:28.046256    7688 ssh_runner.go:235] Completed: systemctl --version: (4.6195098s)
I0507 19:00:28.056409    7688 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
W0507 19:00:28.065952    7688 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
I0507 19:00:28.074274    7688 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
I0507 19:00:28.102288    7688 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
I0507 19:00:28.102424    7688 start.go:494] detecting cgroup driver to use...
I0507 19:00:28.102578    7688 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
" | sudo tee /etc/crictl.yaml"
I0507 19:00:28.144374    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
I0507 19:00:28.170640    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
I0507 19:00:28.188589    7688 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
I0507 19:00:28.196731    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
I0507 19:00:28.226332    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0507 19:00:28.256182    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
I0507 19:00:28.284352    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
I0507 19:00:28.312094    7688 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
I0507 19:00:28.339813    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
I0507 19:00:28.367523    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
I0507 19:00:28.393699    7688 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
I0507 19:00:28.420346    7688 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
I0507 19:00:28.444928    7688 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
I0507 19:00:28.471952    7688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0507 19:00:28.655503    7688 ssh_runner.go:195] Run: sudo systemctl restart containerd
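The run of `sed` commands above rewrites /etc/containerd/config.toml so containerd's CRI plugin uses the cgroupfs driver, the v2 runc shim, the pinned pause image, and the standard CNI config directory. Assuming containerd's version-2 config layout, the net effect of those edits is roughly the following excerpt (a sketch reconstructed from the sed expressions, not the actual file on the VM):

```toml
# Illustrative excerpt of /etc/containerd/config.toml after the edits above
version = 2

[plugins."io.containerd.grpc.v1.cri"]
  # inserted immediately under the cri plugin table by the last sed edit
  enable_unprivileged_ports = true
  sandbox_image = "registry.k8s.io/pause:3.9"
  restrict_oom_score_adj = false

  [plugins."io.containerd.grpc.v1.cri".cni]
    conf_dir = "/etc/cni/net.d"

  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = false
```

The `systemctl daemon-reload` / `systemctl restart containerd` pair that follows applies the change before the log moves on to re-detect the cgroup driver for Docker.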
I0507 19:00:28.694050    7688 start.go:494] detecting cgroup driver to use...
I0507 19:00:28.706188    7688 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0507 19:00:28.737968    7688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0507 19:00:28.766290    7688 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
I0507 19:00:28.799362    7688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0507 19:00:28.838359    7688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0507 19:00:28.867366    7688 ssh_runner.go:195] Run: sudo systemctl stop -f crio
I0507 19:00:28.922960    7688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0507 19:00:28.947506    7688 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0507 19:00:28.993543    7688 ssh_runner.go:195] Run: which cri-dockerd
I0507 19:00:29.007933    7688 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
I0507 19:00:29.024453    7688 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
I0507 19:00:29.063290    7688 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0507 19:00:29.248340    7688 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0507 19:00:29.427404    7688 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
I0507 19:00:29.427404    7688 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
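Here docker.go:574 reports configuring Docker to use "cgroupfs" by copying a 130-byte /etc/docker/daemon.json onto the VM. The log records only the byte count, not the contents; a daemon.json that selects the cgroupfs driver typically looks like the following (the exact keys written by this run are an assumption):

```json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file"
}
```

Like the systemd and containerd changes above, this only takes effect after the `systemctl daemon-reload` and `systemctl restart docker` that the log runs next.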
I0507 19:00:29.469046    7688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0507 19:00:29.660321    7688 ssh_runner.go:195] Run: sudo systemctl restart docker
I0507 19:00:32.267041    7688 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6064831s)
I0507 19:00:32.276353    7688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
I0507 19:00:32.305912    7688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0507 19:00:32.336283    7688 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
I0507 19:00:32.538037    7688 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
I0507 19:00:32.725134    7688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0507 19:00:32.914772    7688 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
I0507 19:00:32.949268    7688 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
I0507 19:00:32.981577    7688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0507 19:00:33.179060    7688 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
I0507 19:00:33.296393    7688 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0507 19:00:33.306277    7688 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
I0507 19:00:33.313831    7688 start.go:562] Will wait 60s for crictl version
I0507 19:00:33.324493    7688 ssh_runner.go:195] Run: which crictl
I0507 19:00:33.338515    7688 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
I0507 19:00:33.398553    7688 start.go:578] Version:  0.1.0
RuntimeName:  docker
RuntimeVersion:  26.0.2
RuntimeApiVersion:  v1
I0507 19:00:33.409588    7688 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0507 19:00:33.452123    7688 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0507 19:00:33.486861    7688 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
I0507 19:00:33.487055    7688 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
I0507 19:00:33.492796    7688 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
I0507 19:00:33.492796    7688 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
I0507 19:00:33.492796    7688 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
I0507 19:00:33.492796    7688 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a3:a5:4f Flags:up|broadcast|multicast|running}
I0507 19:00:33.495599    7688 ip.go:210] interface addr: fe80::1edb:f5fd:c218:d8d2/64
I0507 19:00:33.495599    7688 ip.go:210] interface addr: 172.19.128.1/20
I0507 19:00:33.503644    7688 ssh_runner.go:195] Run: grep 172.19.128.1	host.minikube.internal$ /etc/hosts
I0507 19:00:33.510582    7688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0507 19:00:33.530557    7688 mustload.go:65] Loading cluster: ha-210800
I0507 19:00:33.532229    7688 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 19:00:33.532947    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
I0507 19:00:35.510845    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 19:00:35.510845    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:35.510845    7688 host.go:66] Checking if "ha-210800" exists ...
I0507 19:00:35.511781    7688 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800 for IP: 172.19.143.44
I0507 19:00:35.511781    7688 certs.go:194] generating shared ca certs ...
I0507 19:00:35.511781    7688 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0507 19:00:35.512675    7688 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
I0507 19:00:35.513139    7688 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
I0507 19:00:35.513402    7688 certs.go:256] generating profile certs ...
I0507 19:00:35.513938    7688 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.key
I0507 19:00:35.514089    7688 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.7bc7bc9f
I0507 19:00:35.514184    7688 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.7bc7bc9f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.132.69 172.19.143.44 172.19.137.224 172.19.143.254]
I0507 19:00:35.650650    7688 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.7bc7bc9f ...
I0507 19:00:35.650650    7688 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.7bc7bc9f: {Name:mkb3c429209752ce2d72d0e064f069647bcac036 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0507 19:00:35.652902    7688 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.7bc7bc9f ...
I0507 19:00:35.652992    7688 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.7bc7bc9f: {Name:mk8ecd6be39ee084948670d74e33e85d0cb8d730 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0507 19:00:35.654397    7688 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.7bc7bc9f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt
I0507 19:00:35.666042    7688 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.7bc7bc9f -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key
I0507 19:00:35.666719    7688 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key
I0507 19:00:35.666719    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
I0507 19:00:35.667069    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
I0507 19:00:35.667196    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
I0507 19:00:35.667196    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
I0507 19:00:35.667196    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
I0507 19:00:35.667196    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
I0507 19:00:35.667840    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
I0507 19:00:35.668116    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
I0507 19:00:35.668266    7688 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem (1338 bytes)
W0507 19:00:35.668638    7688 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992_empty.pem, impossibly tiny 0 bytes
I0507 19:00:35.668786    7688 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
I0507 19:00:35.668983    7688 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
I0507 19:00:35.668983    7688 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
I0507 19:00:35.668983    7688 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
I0507 19:00:35.668983    7688 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem (1708 bytes)
I0507 19:00:35.669725    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem -> /usr/share/ca-certificates/9992.pem
I0507 19:00:35.669850    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /usr/share/ca-certificates/99922.pem
I0507 19:00:35.669917    7688 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
I0507 19:00:35.670230    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
I0507 19:00:37.635210    7688 main.go:141] libmachine: [stdout =====>] : Running

I0507 19:00:37.635287    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:37.635357    7688 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
I0507 19:00:39.938184    7688 main.go:141] libmachine: [stdout =====>] : 172.19.132.69

I0507 19:00:39.938184    7688 main.go:141] libmachine: [stderr =====>] : 
I0507 19:00:39.938184    7688 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
I0507 19:00:40.043940    7688 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
I0507 19:00:40.047831    7688 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
I0507 19:00:40.080544    7688 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
I0507 19:00:40.087424    7688 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
I0507 19:00:40.113535    7688 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
I0507 19:00:40.120566    7688 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
I0507 19:00:40.148773    7688 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
I0507 19:00:40.154633    7688 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
I0507 19:00:40.181350    7688 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
I0507 19:00:40.187497    7688 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
I0507 19:00:40.216758    7688 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
I0507 19:00:40.223451    7688 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
I0507 19:00:40.242486    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0507 19:00:40.293086    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
I0507 19:00:40.339256    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0507 19:00:40.383305    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0507 19:00:40.429567    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
I0507 19:00:40.476667    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
I0507 19:00:40.520546    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0507 19:00:40.565107    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
I0507 19:00:40.608228    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem --> /usr/share/ca-certificates/9992.pem (1338 bytes)
I0507 19:00:40.650385    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /usr/share/ca-certificates/99922.pem (1708 bytes)
I0507 19:00:40.693634    7688 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0507 19:00:40.737100    7688 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
I0507 19:00:40.767114    7688 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
I0507 19:00:40.797983    7688 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
I0507 19:00:40.829467    7688 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
I0507 19:00:40.860808    7688 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
I0507 19:00:40.894001    7688 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
I0507 19:00:40.924692    7688 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
I0507 19:00:40.966330    7688 ssh_runner.go:195] Run: openssl version
I0507 19:00:40.983729    7688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9992.pem && ln -fs /usr/share/ca-certificates/9992.pem /etc/ssl/certs/9992.pem"
I0507 19:00:41.012028    7688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9992.pem
I0507 19:00:41.018888    7688 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
I0507 19:00:41.026880    7688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9992.pem
I0507 19:00:41.043569    7688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9992.pem /etc/ssl/certs/51391683.0"
I0507 19:00:41.072855    7688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99922.pem && ln -fs /usr/share/ca-certificates/99922.pem /etc/ssl/certs/99922.pem"
I0507 19:00:41.100178    7688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99922.pem
I0507 19:00:41.107139    7688 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
I0507 19:00:41.115171    7688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99922.pem
I0507 19:00:41.130848    7688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99922.pem /etc/ssl/certs/3ec20f2e.0"
I0507 19:00:41.158509    7688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0507 19:00:41.185638    7688 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0507 19:00:41.193161    7688 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
I0507 19:00:41.201555    7688 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0507 19:00:41.218078    7688 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0507 19:00:41.249926    7688 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
I0507 19:00:41.266367    7688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
I0507 19:00:41.284578    7688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
I0507 19:00:41.301642    7688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
I0507 19:00:41.319545    7688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
I0507 19:00:41.337629    7688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
I0507 19:00:41.355354    7688 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
I0507 19:00:41.365203    7688 kubeadm.go:928] updating node {m02 172.19.143.44 8443 v1.30.0 docker true true} ...
I0507 19:00:41.365512    7688 kubeadm.go:940] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-210800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.143.44

[Install]
config:
{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0507 19:00:41.365512    7688 kube-vip.go:111] generating kube-vip config ...
I0507 19:00:41.373697    7688 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
I0507 19:00:41.399863    7688 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
I0507 19:00:41.399863    7688 kube-vip.go:133] kube-vip config:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - args:
    - manager
    env:
    - name: vip_arp
      value: "true"
    - name: port
      value: "8443"
    - name: vip_interface
      value: eth0
    - name: vip_cidr
      value: "32"
    - name: dns_mode
      value: first
    - name: cp_enable
      value: "true"
    - name: cp_namespace
      value: kube-system
    - name: vip_leaderelection
      value: "true"
    - name: vip_leasename
      value: plndr-cp-lock
    - name: vip_leaseduration
      value: "5"
    - name: vip_renewdeadline
      value: "3"
    - name: vip_retryperiod
      value: "1"
    - name: address
      value: 172.19.143.254
    - name: prometheus_server
      value: :2112
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "8443"
    image: ghcr.io/kube-vip/kube-vip:v0.7.1
    imagePullPolicy: IfNotPresent
    name: kube-vip
    resources: {}
    securityContext:
      capabilities:
        add:
        - NET_ADMIN
        - NET_RAW
    volumeMounts:
    - mountPath: /etc/kubernetes/admin.conf
      name: kubeconfig
  hostAliases:
  - hostnames:
    - kubernetes
    ip: 127.0.0.1
  hostNetwork: true
  volumes:
  - hostPath:
      path: "/etc/kubernetes/admin.conf"
    name: kubeconfig
status: {}
I0507 19:00:41.407688    7688 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
I0507 19:00:41.429644    7688 binaries.go:44] Found k8s binaries, skipping transfer
I0507 19:00:41.439263    7688 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
I0507 19:00:41.458801    7688 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
I0507 19:00:41.489422    7688 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0507 19:00:41.520196    7688 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
I0507 19:00:41.560570    7688 ssh_runner.go:195] Run: grep 172.19.143.254	control-plane.minikube.internal$ /etc/hosts
I0507 19:00:41.566779    7688 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0507 19:00:41.597764    7688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0507 19:00:41.782797    7688 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0507 19:00:41.814462    7688 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.143.44 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
I0507 19:00:41.819335    7688 out.go:177] * Verifying Kubernetes components...
I0507 19:00:41.814462    7688 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
I0507 19:00:41.815093    7688 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 19:00:41.825774    7688 out.go:177] * Enabled addons: 
I0507 19:00:41.827920    7688 addons.go:505] duration metric: took 13.4576ms for enable addons: enabled=[]
I0507 19:00:41.835189    7688 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0507 19:00:42.037273    7688 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0507 19:00:42.071406    7688 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
I0507 19:00:42.072189    7688 kapi.go:59] client config for ha-210800: &rest.Config{Host:"https://172.19.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-210800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-210800\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
W0507 19:00:42.072299    7688 kubeadm.go:477] Overriding stale ClientConfig host https://172.19.143.254:8443 with https://172.19.132.69:8443
I0507 19:00:42.073479    7688 cert_rotation.go:137] Starting client certificate rotation controller
I0507 19:00:42.073479    7688 node_ready.go:35] waiting up to 6m0s for node "ha-210800-m02" to be "Ready" ...
I0507 19:00:42.074213    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:42.074302    7688 round_trippers.go:469] Request Headers:
I0507 19:00:42.074302    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:42.074302    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:42.091599    7688 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
I0507 19:00:42.588746    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:42.588864    7688 round_trippers.go:469] Request Headers:
I0507 19:00:42.588864    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:42.588864    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:42.592713    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:00:42.593669    7688 node_ready.go:49] node "ha-210800-m02" has status "Ready":"True"
I0507 19:00:42.593669    7688 node_ready.go:38] duration metric: took 519.6084ms for node "ha-210800-m02" to be "Ready" ...
I0507 19:00:42.593669    7688 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
I0507 19:00:42.593862    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
I0507 19:00:42.593897    7688 round_trippers.go:469] Request Headers:
I0507 19:00:42.593897    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:42.593920    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:42.602094    7688 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0507 19:00:42.613674    7688 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace to be "Ready" ...
I0507 19:00:42.613674    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cr9nn
I0507 19:00:42.613674    7688 round_trippers.go:469] Request Headers:
I0507 19:00:42.613674    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:42.613674    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:42.617717    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:42.618794    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
I0507 19:00:42.618865    7688 round_trippers.go:469] Request Headers:
I0507 19:00:42.618865    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:42.618865    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:42.622096    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:00:42.623273    7688 pod_ready.go:92] pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace has status "Ready":"True"
I0507 19:00:42.623273    7688 pod_ready.go:81] duration metric: took 9.5986ms for pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace to be "Ready" ...
I0507 19:00:42.623273    7688 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace to be "Ready" ...
I0507 19:00:42.623273    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dxsqf
I0507 19:00:42.623273    7688 round_trippers.go:469] Request Headers:
I0507 19:00:42.623273    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:42.623273    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:42.627607    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:42.628378    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
I0507 19:00:42.628409    7688 round_trippers.go:469] Request Headers:
I0507 19:00:42.628409    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:42.628449    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:42.632093    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:00:42.633114    7688 pod_ready.go:92] pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace has status "Ready":"True"
I0507 19:00:42.633114    7688 pod_ready.go:81] duration metric: took 9.8403ms for pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace to be "Ready" ...
I0507 19:00:42.633114    7688 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-210800" in "kube-system" namespace to be "Ready" ...
I0507 19:00:42.633200    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800
I0507 19:00:42.633273    7688 round_trippers.go:469] Request Headers:
I0507 19:00:42.633273    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:42.633273    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:42.639801    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0507 19:00:42.639801    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
I0507 19:00:42.639801    7688 round_trippers.go:469] Request Headers:
I0507 19:00:42.639801    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:42.639801    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:42.644122    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:42.644820    7688 pod_ready.go:92] pod "etcd-ha-210800" in "kube-system" namespace has status "Ready":"True"
I0507 19:00:42.644820    7688 pod_ready.go:81] duration metric: took 11.6192ms for pod "etcd-ha-210800" in "kube-system" namespace to be "Ready" ...
I0507 19:00:42.644820    7688 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
I0507 19:00:42.645389    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:42.645389    7688 round_trippers.go:469] Request Headers:
I0507 19:00:42.645389    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:42.645389    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:42.648949    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:00:42.648949    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:42.648949    7688 round_trippers.go:469] Request Headers:
I0507 19:00:42.648949    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:42.648949    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:42.654031    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:00:43.152486    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:43.152486    7688 round_trippers.go:469] Request Headers:
I0507 19:00:43.152486    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:43.152486    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:43.156820    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:43.157826    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:43.157826    7688 round_trippers.go:469] Request Headers:
I0507 19:00:43.157826    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:43.157826    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:43.162957    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:00:43.645604    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:43.645604    7688 round_trippers.go:469] Request Headers:
I0507 19:00:43.645604    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:43.645682    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:43.654577    7688 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0507 19:00:43.656251    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:43.656283    7688 round_trippers.go:469] Request Headers:
I0507 19:00:43.656283    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:43.656305    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:43.704534    7688 round_trippers.go:574] Response Status: 200 OK in 48 milliseconds
I0507 19:00:44.151753    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:44.151753    7688 round_trippers.go:469] Request Headers:
I0507 19:00:44.151753    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:44.151753    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:44.156329    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:44.157443    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:44.157443    7688 round_trippers.go:469] Request Headers:
I0507 19:00:44.157443    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:44.157443    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:44.162664    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:00:44.656034    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:44.656034    7688 round_trippers.go:469] Request Headers:
I0507 19:00:44.656034    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:44.656110    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:44.660224    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:44.662129    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:44.662189    7688 round_trippers.go:469] Request Headers:
I0507 19:00:44.662189    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:44.662189    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:44.666268    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:44.667628    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:00:45.147281    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:45.147499    7688 round_trippers.go:469] Request Headers:
I0507 19:00:45.147499    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:45.147499    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:45.152129    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:45.153080    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:45.153080    7688 round_trippers.go:469] Request Headers:
I0507 19:00:45.153080    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:45.153080    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:45.157094    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:45.651735    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:45.651792    7688 round_trippers.go:469] Request Headers:
I0507 19:00:45.651792    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:45.651792    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:45.674125    7688 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
I0507 19:00:45.675034    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:45.675034    7688 round_trippers.go:469] Request Headers:
I0507 19:00:45.675034    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:45.675109    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:45.682386    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:00:46.156297    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:46.156297    7688 round_trippers.go:469] Request Headers:
I0507 19:00:46.156297    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:46.156297    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:46.161386    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:00:46.162371    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:46.162435    7688 round_trippers.go:469] Request Headers:
I0507 19:00:46.162435    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:46.162435    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:46.165742    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:00:46.648635    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:46.648635    7688 round_trippers.go:469] Request Headers:
I0507 19:00:46.648635    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:46.648635    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:46.656191    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:00:46.657987    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:46.657987    7688 round_trippers.go:469] Request Headers:
I0507 19:00:46.657987    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:46.657987    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:46.663429    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:00:47.158042    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:47.158042    7688 round_trippers.go:469] Request Headers:
I0507 19:00:47.158042    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:47.158042    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:47.162611    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:47.163374    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:47.163374    7688 round_trippers.go:469] Request Headers:
I0507 19:00:47.163374    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:47.163374    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:47.166941    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:00:47.168541    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:00:47.660291    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:47.660291    7688 round_trippers.go:469] Request Headers:
I0507 19:00:47.660397    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:47.660397    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:47.664577    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:47.666314    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:47.666314    7688 round_trippers.go:469] Request Headers:
I0507 19:00:47.666314    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:47.666314    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:47.670630    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:48.160120    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:48.160368    7688 round_trippers.go:469] Request Headers:
I0507 19:00:48.160368    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:48.160368    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:48.173207    7688 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
I0507 19:00:48.175439    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:48.175439    7688 round_trippers.go:469] Request Headers:
I0507 19:00:48.175533    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:48.175533    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:48.184810    7688 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0507 19:00:48.655465    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:48.655465    7688 round_trippers.go:469] Request Headers:
I0507 19:00:48.655465    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:48.655465    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:48.660489    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:00:48.661953    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:48.661953    7688 round_trippers.go:469] Request Headers:
I0507 19:00:48.661953    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:48.661953    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:48.668762    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0507 19:00:49.157673    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:49.157751    7688 round_trippers.go:469] Request Headers:
I0507 19:00:49.157751    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:49.157751    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:49.162074    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:49.167505    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:49.167505    7688 round_trippers.go:469] Request Headers:
I0507 19:00:49.167505    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:49.167596    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:49.172179    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:49.173160    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:00:49.657849    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:49.658083    7688 round_trippers.go:469] Request Headers:
I0507 19:00:49.658083    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:49.658083    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:49.663438    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:00:49.664726    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:49.664726    7688 round_trippers.go:469] Request Headers:
I0507 19:00:49.664726    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:49.664726    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:49.669651    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:50.153960    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:50.154160    7688 round_trippers.go:469] Request Headers:
I0507 19:00:50.154160    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:50.154160    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:50.159949    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:00:50.161897    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:50.161897    7688 round_trippers.go:469] Request Headers:
I0507 19:00:50.161897    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:50.161897    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:50.167349    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:00:50.655470    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:50.655573    7688 round_trippers.go:469] Request Headers:
I0507 19:00:50.655573    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:50.655573    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:50.660354    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:50.661391    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:50.661933    7688 round_trippers.go:469] Request Headers:
I0507 19:00:50.662036    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:50.662036    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:50.667759    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:00:51.152865    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:51.152936    7688 round_trippers.go:469] Request Headers:
I0507 19:00:51.153006    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:51.153006    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:51.159427    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0507 19:00:51.160382    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:51.161271    7688 round_trippers.go:469] Request Headers:
I0507 19:00:51.161309    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:51.161309    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:51.165122    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:00:51.652511    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:51.652579    7688 round_trippers.go:469] Request Headers:
I0507 19:00:51.652579    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:51.652648    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:51.661350    7688 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0507 19:00:51.662514    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:51.662556    7688 round_trippers.go:469] Request Headers:
I0507 19:00:51.662556    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:51.662586    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:51.666739    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:51.667878    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:00:52.153944    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:52.153944    7688 round_trippers.go:469] Request Headers:
I0507 19:00:52.153944    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:52.153944    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:52.162689    7688 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0507 19:00:52.164386    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:52.164386    7688 round_trippers.go:469] Request Headers:
I0507 19:00:52.164386    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:52.164523    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:52.167686    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:00:52.657393    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:52.657485    7688 round_trippers.go:469] Request Headers:
I0507 19:00:52.657485    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:52.657485    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:52.662638    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:00:52.664303    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:52.664303    7688 round_trippers.go:469] Request Headers:
I0507 19:00:52.664402    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:52.664402    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:52.669852    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:00:53.156221    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:53.156221    7688 round_trippers.go:469] Request Headers:
I0507 19:00:53.156221    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:53.156221    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:53.160502    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:53.162077    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:53.162077    7688 round_trippers.go:469] Request Headers:
I0507 19:00:53.162077    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:53.162077    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:53.166347    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:53.646814    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:53.646876    7688 round_trippers.go:469] Request Headers:
I0507 19:00:53.646876    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:53.646876    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:53.652287    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:53.653276    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:53.653276    7688 round_trippers.go:469] Request Headers:
I0507 19:00:53.653276    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:53.653341    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:53.656359    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:00:54.149496    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:54.149496    7688 round_trippers.go:469] Request Headers:
I0507 19:00:54.149496    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:54.149496    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:54.154697    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:54.155488    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:54.155488    7688 round_trippers.go:469] Request Headers:
I0507 19:00:54.155488    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:54.155488    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:54.161110    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:00:54.162281    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:00:54.648278    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:54.648398    7688 round_trippers.go:469] Request Headers:
I0507 19:00:54.648398    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:54.648398    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:54.653832    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:00:54.655450    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:54.655450    7688 round_trippers.go:469] Request Headers:
I0507 19:00:54.655450    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:54.655450    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:54.663366    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:00:55.150842    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:55.150842    7688 round_trippers.go:469] Request Headers:
I0507 19:00:55.150842    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:55.150842    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:55.157398    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0507 19:00:55.159107    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:55.159107    7688 round_trippers.go:469] Request Headers:
I0507 19:00:55.159237    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:55.159237    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:55.164012    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:55.652905    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:55.652990    7688 round_trippers.go:469] Request Headers:
I0507 19:00:55.652990    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:55.652990    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:55.659306    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0507 19:00:55.659910    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:55.659910    7688 round_trippers.go:469] Request Headers:
I0507 19:00:55.659910    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:55.659910    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:55.663480    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:00:56.153600    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:56.153600    7688 round_trippers.go:469] Request Headers:
I0507 19:00:56.153600    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:56.153600    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:56.161470    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:00:56.163815    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:56.163873    7688 round_trippers.go:469] Request Headers:
I0507 19:00:56.163923    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:56.163983    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:56.168252    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:56.170549    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:00:56.655232    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:56.655516    7688 round_trippers.go:469] Request Headers:
I0507 19:00:56.655516    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:56.655516    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:56.662935    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:00:56.665105    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:56.665216    7688 round_trippers.go:469] Request Headers:
I0507 19:00:56.665216    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:56.665216    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:56.670462    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:00:57.159468    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:57.159781    7688 round_trippers.go:469] Request Headers:
I0507 19:00:57.159781    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:57.159781    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:57.165092    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:00:57.166268    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:57.166268    7688 round_trippers.go:469] Request Headers:
I0507 19:00:57.166268    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:57.166268    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:57.171114    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:57.656653    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:57.656653    7688 round_trippers.go:469] Request Headers:
I0507 19:00:57.656653    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:57.656653    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:57.660962    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:57.662545    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:57.662616    7688 round_trippers.go:469] Request Headers:
I0507 19:00:57.662616    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:57.662616    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:57.666942    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:58.159723    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:58.159723    7688 round_trippers.go:469] Request Headers:
I0507 19:00:58.159723    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:58.159723    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:58.168917    7688 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0507 19:00:58.170511    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:58.170623    7688 round_trippers.go:469] Request Headers:
I0507 19:00:58.170623    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:58.170697    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:58.174672    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:00:58.175888    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:00:58.660937    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:58.660937    7688 round_trippers.go:469] Request Headers:
I0507 19:00:58.660937    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:58.660937    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:58.664703    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:00:58.667098    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:58.667172    7688 round_trippers.go:469] Request Headers:
I0507 19:00:58.667172    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:58.667172    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:58.671627    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:59.148393    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:59.148393    7688 round_trippers.go:469] Request Headers:
I0507 19:00:59.148472    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:59.148472    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:59.157571    7688 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
I0507 19:00:59.158745    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:59.158745    7688 round_trippers.go:469] Request Headers:
I0507 19:00:59.158745    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:59.158745    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:59.162444    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:00:59.647174    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:00:59.647174    7688 round_trippers.go:469] Request Headers:
I0507 19:00:59.647440    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:59.647440    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:59.652292    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:00:59.653392    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:00:59.653486    7688 round_trippers.go:469] Request Headers:
I0507 19:00:59.653486    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:00:59.653486    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:00:59.658258    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:00.161349    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:00.161427    7688 round_trippers.go:469] Request Headers:
I0507 19:01:00.161427    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:00.161427    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:00.169401    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:01:00.170910    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:00.170968    7688 round_trippers.go:469] Request Headers:
I0507 19:01:00.171025    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:00.171025    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:00.175692    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:00.176303    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:00.655677    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:00.655677    7688 round_trippers.go:469] Request Headers:
I0507 19:01:00.655677    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:00.655677    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:00.660434    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:00.662140    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:00.662140    7688 round_trippers.go:469] Request Headers:
I0507 19:01:00.662140    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:00.662274    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:00.666620    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:01.155627    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:01.155627    7688 round_trippers.go:469] Request Headers:
I0507 19:01:01.155710    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:01.155710    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:01.162027    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0507 19:01:01.162925    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:01.162925    7688 round_trippers.go:469] Request Headers:
I0507 19:01:01.162925    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:01.162925    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:01.167044    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:01.656881    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:01.656881    7688 round_trippers.go:469] Request Headers:
I0507 19:01:01.656881    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:01.656881    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:01.664821    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:01:01.665828    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:01.665828    7688 round_trippers.go:469] Request Headers:
I0507 19:01:01.665828    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:01.665828    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:01.669925    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:02.159036    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:02.159177    7688 round_trippers.go:469] Request Headers:
I0507 19:01:02.159177    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:02.159177    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:02.163500    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:02.164794    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:02.164794    7688 round_trippers.go:469] Request Headers:
I0507 19:01:02.164794    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:02.164858    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:02.168281    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:02.648133    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:02.648215    7688 round_trippers.go:469] Request Headers:
I0507 19:01:02.648215    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:02.648215    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:02.653766    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:02.654639    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:02.654639    7688 round_trippers.go:469] Request Headers:
I0507 19:01:02.654639    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:02.654639    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:02.658932    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:02.659680    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:03.148669    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:03.148669    7688 round_trippers.go:469] Request Headers:
I0507 19:01:03.148797    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:03.148797    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:03.153867    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:03.154835    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:03.154835    7688 round_trippers.go:469] Request Headers:
I0507 19:01:03.154835    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:03.154835    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:03.161882    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:01:03.646696    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:03.646696    7688 round_trippers.go:469] Request Headers:
I0507 19:01:03.646696    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:03.646696    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:03.654549    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:01:03.655603    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:03.655603    7688 round_trippers.go:469] Request Headers:
I0507 19:01:03.655603    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:03.655603    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:03.659158    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:04.147491    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:04.147641    7688 round_trippers.go:469] Request Headers:
I0507 19:01:04.147718    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:04.147718    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:04.153027    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:04.154733    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:04.154790    7688 round_trippers.go:469] Request Headers:
I0507 19:01:04.154790    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:04.154790    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:04.158557    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:04.648308    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:04.648308    7688 round_trippers.go:469] Request Headers:
I0507 19:01:04.648308    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:04.648308    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:04.652447    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:04.654460    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:04.654460    7688 round_trippers.go:469] Request Headers:
I0507 19:01:04.654460    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:04.654460    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:04.657673    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:05.152400    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:05.152522    7688 round_trippers.go:469] Request Headers:
I0507 19:01:05.152522    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:05.152522    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:05.157135    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:05.157736    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:05.157736    7688 round_trippers.go:469] Request Headers:
I0507 19:01:05.157736    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:05.157736    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:05.161307    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:05.162004    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:05.657082    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:05.657082    7688 round_trippers.go:469] Request Headers:
I0507 19:01:05.657082    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:05.657082    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:05.662648    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:05.663519    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:05.663519    7688 round_trippers.go:469] Request Headers:
I0507 19:01:05.663519    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:05.663519    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:05.667174    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:06.155758    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:06.155826    7688 round_trippers.go:469] Request Headers:
I0507 19:01:06.155826    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:06.155826    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:06.160409    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:06.162298    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:06.162387    7688 round_trippers.go:469] Request Headers:
I0507 19:01:06.162425    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:06.162450    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:06.165729    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:06.658525    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:06.658525    7688 round_trippers.go:469] Request Headers:
I0507 19:01:06.658525    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:06.658525    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:06.662138    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:06.663649    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:06.663736    7688 round_trippers.go:469] Request Headers:
I0507 19:01:06.663736    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:06.663736    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:06.668446    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:07.157115    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:07.157115    7688 round_trippers.go:469] Request Headers:
I0507 19:01:07.157115    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:07.157115    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:07.161716    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:07.162923    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:07.162923    7688 round_trippers.go:469] Request Headers:
I0507 19:01:07.162923    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:07.162923    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:07.167104    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:07.168472    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:07.662030    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:07.662030    7688 round_trippers.go:469] Request Headers:
I0507 19:01:07.662030    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:07.662030    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:07.666685    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:07.668854    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:07.668854    7688 round_trippers.go:469] Request Headers:
I0507 19:01:07.668854    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:07.668854    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:07.673429    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:08.147487    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:08.147487    7688 round_trippers.go:469] Request Headers:
I0507 19:01:08.147735    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:08.147735    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:08.151504    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:08.153179    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:08.153179    7688 round_trippers.go:469] Request Headers:
I0507 19:01:08.153179    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:08.153179    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:08.157079    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:08.660864    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:08.660864    7688 round_trippers.go:469] Request Headers:
I0507 19:01:08.660864    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:08.660864    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:08.665481    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:08.666997    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:08.666997    7688 round_trippers.go:469] Request Headers:
I0507 19:01:08.667054    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:08.667054    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:08.675059    7688 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0507 19:01:09.161310    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:09.161310    7688 round_trippers.go:469] Request Headers:
I0507 19:01:09.161310    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:09.161310    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:09.166902    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:09.168013    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:09.168083    7688 round_trippers.go:469] Request Headers:
I0507 19:01:09.168083    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:09.168083    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:09.172754    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:09.173369    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:09.659489    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:09.659489    7688 round_trippers.go:469] Request Headers:
I0507 19:01:09.659489    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:09.659489    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:09.664146    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:09.666221    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:09.666221    7688 round_trippers.go:469] Request Headers:
I0507 19:01:09.666315    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:09.666315    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:09.670552    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:10.159020    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:10.159020    7688 round_trippers.go:469] Request Headers:
I0507 19:01:10.159020    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:10.159020    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:10.163490    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:10.165245    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:10.165245    7688 round_trippers.go:469] Request Headers:
I0507 19:01:10.165245    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:10.165351    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:10.168667    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:10.661756    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:10.661756    7688 round_trippers.go:469] Request Headers:
I0507 19:01:10.661756    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:10.661756    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:10.666706    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:10.669057    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:10.669145    7688 round_trippers.go:469] Request Headers:
I0507 19:01:10.669145    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:10.669145    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:10.674796    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:11.160384    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:11.160384    7688 round_trippers.go:469] Request Headers:
I0507 19:01:11.160384    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:11.160499    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:11.165398    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:11.167401    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:11.167401    7688 round_trippers.go:469] Request Headers:
I0507 19:01:11.167401    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:11.167401    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:11.171683    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:11.661216    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:11.661480    7688 round_trippers.go:469] Request Headers:
I0507 19:01:11.661480    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:11.661480    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:11.665237    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:11.667509    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:11.667509    7688 round_trippers.go:469] Request Headers:
I0507 19:01:11.667571    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:11.667571    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:11.674541    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0507 19:01:11.674541    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:12.147267    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:12.147267    7688 round_trippers.go:469] Request Headers:
I0507 19:01:12.147267    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:12.147267    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:12.152968    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:12.155044    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:12.155108    7688 round_trippers.go:469] Request Headers:
I0507 19:01:12.155108    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:12.155108    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:12.166316    7688 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
I0507 19:01:12.648313    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:12.648313    7688 round_trippers.go:469] Request Headers:
I0507 19:01:12.648313    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:12.648313    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:12.653880    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:12.655050    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:12.655136    7688 round_trippers.go:469] Request Headers:
I0507 19:01:12.655136    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:12.655136    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:12.661667    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0507 19:01:13.149774    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:13.149845    7688 round_trippers.go:469] Request Headers:
I0507 19:01:13.149845    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:13.149845    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:13.155140    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:13.156282    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:13.156282    7688 round_trippers.go:469] Request Headers:
I0507 19:01:13.156282    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:13.156282    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:13.160463    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:13.648566    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:13.648566    7688 round_trippers.go:469] Request Headers:
I0507 19:01:13.648633    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:13.648633    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:13.653490    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:13.654661    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:13.654731    7688 round_trippers.go:469] Request Headers:
I0507 19:01:13.654731    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:13.654731    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:13.660996    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0507 19:01:14.150667    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:14.150667    7688 round_trippers.go:469] Request Headers:
I0507 19:01:14.150667    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:14.150667    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:14.155377    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:14.157194    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:14.157247    7688 round_trippers.go:469] Request Headers:
I0507 19:01:14.157247    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:14.157247    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:14.161567    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:14.162235    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:14.652779    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:14.652779    7688 round_trippers.go:469] Request Headers:
I0507 19:01:14.652863    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:14.652863    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:14.658655    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:14.659984    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:14.660094    7688 round_trippers.go:469] Request Headers:
I0507 19:01:14.660094    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:14.660094    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:14.664907    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:15.161275    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:15.161275    7688 round_trippers.go:469] Request Headers:
I0507 19:01:15.161275    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:15.161275    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:15.167148    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:15.167808    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:15.167808    7688 round_trippers.go:469] Request Headers:
I0507 19:01:15.167808    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:15.167808    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:15.171760    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:15.650070    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:15.650070    7688 round_trippers.go:469] Request Headers:
I0507 19:01:15.650070    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:15.650070    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:15.655659    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:15.657043    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:15.657043    7688 round_trippers.go:469] Request Headers:
I0507 19:01:15.657043    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:15.657043    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:15.659636    7688 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
I0507 19:01:16.148048    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:16.148048    7688 round_trippers.go:469] Request Headers:
I0507 19:01:16.148121    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:16.148121    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:16.155384    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:01:16.156398    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:16.156398    7688 round_trippers.go:469] Request Headers:
I0507 19:01:16.156398    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:16.156398    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:16.161028    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:16.655589    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:16.655589    7688 round_trippers.go:469] Request Headers:
I0507 19:01:16.655589    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:16.655736    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:16.660388    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:16.662523    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:16.662523    7688 round_trippers.go:469] Request Headers:
I0507 19:01:16.662523    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:16.662523    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:16.670258    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:01:16.671250    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:17.154464    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:17.154464    7688 round_trippers.go:469] Request Headers:
I0507 19:01:17.154464    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:17.154817    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:17.159065    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:17.160185    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:17.160185    7688 round_trippers.go:469] Request Headers:
I0507 19:01:17.160185    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:17.160185    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:17.164059    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:17.653562    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:17.653562    7688 round_trippers.go:469] Request Headers:
I0507 19:01:17.653722    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:17.653722    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:17.658673    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:17.660083    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:17.660083    7688 round_trippers.go:469] Request Headers:
I0507 19:01:17.660146    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:17.660146    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:17.666725    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0507 19:01:18.153083    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:18.153179    7688 round_trippers.go:469] Request Headers:
I0507 19:01:18.153241    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:18.153241    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:18.158964    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:18.159880    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:18.159947    7688 round_trippers.go:469] Request Headers:
I0507 19:01:18.159947    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:18.159947    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:18.163684    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:18.655345    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:18.655597    7688 round_trippers.go:469] Request Headers:
I0507 19:01:18.655597    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:18.655597    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:18.660194    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:18.662363    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:18.662363    7688 round_trippers.go:469] Request Headers:
I0507 19:01:18.662456    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:18.662456    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:18.666827    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:19.155182    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:19.155182    7688 round_trippers.go:469] Request Headers:
I0507 19:01:19.155182    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:19.155182    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:19.159930    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:19.161711    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:19.161764    7688 round_trippers.go:469] Request Headers:
I0507 19:01:19.161834    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:19.161834    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:19.165064    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:19.166588    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:19.652685    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:19.652800    7688 round_trippers.go:469] Request Headers:
I0507 19:01:19.652800    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:19.652800    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:19.657266    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:19.659262    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:19.659262    7688 round_trippers.go:469] Request Headers:
I0507 19:01:19.659262    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:19.659262    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:19.663826    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:20.149935    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:20.149935    7688 round_trippers.go:469] Request Headers:
I0507 19:01:20.150395    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:20.150395    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:20.155486    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:20.156722    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:20.156722    7688 round_trippers.go:469] Request Headers:
I0507 19:01:20.156722    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:20.156722    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:20.160330    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:20.663087    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:20.663087    7688 round_trippers.go:469] Request Headers:
I0507 19:01:20.663087    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:20.663087    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:20.667470    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:20.669907    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:20.669907    7688 round_trippers.go:469] Request Headers:
I0507 19:01:20.669907    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:20.669907    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:20.675218    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:21.161963    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:21.161963    7688 round_trippers.go:469] Request Headers:
I0507 19:01:21.161963    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:21.161963    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:21.166956    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:21.167572    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:21.167572    7688 round_trippers.go:469] Request Headers:
I0507 19:01:21.167572    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:21.167572    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:21.171194    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:21.173047    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:21.661874    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:21.661874    7688 round_trippers.go:469] Request Headers:
I0507 19:01:21.661874    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:21.661874    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:21.666262    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:21.667591    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:21.667655    7688 round_trippers.go:469] Request Headers:
I0507 19:01:21.667655    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:21.667655    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:21.670966    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:22.147909    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:22.148182    7688 round_trippers.go:469] Request Headers:
I0507 19:01:22.148182    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:22.148182    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:22.153415    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:22.154900    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:22.154952    7688 round_trippers.go:469] Request Headers:
I0507 19:01:22.154952    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:22.154952    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:22.162174    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:01:22.654704    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:22.654704    7688 round_trippers.go:469] Request Headers:
I0507 19:01:22.654704    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:22.654704    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:22.659834    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:22.660537    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:22.660537    7688 round_trippers.go:469] Request Headers:
I0507 19:01:22.660537    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:22.660537    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:22.665175    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:23.163039    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:23.163152    7688 round_trippers.go:469] Request Headers:
I0507 19:01:23.163152    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:23.163152    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:23.167489    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:23.168702    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:23.168702    7688 round_trippers.go:469] Request Headers:
I0507 19:01:23.168702    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:23.168702    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:23.172898    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:23.173887    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:23.649428    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:23.649428    7688 round_trippers.go:469] Request Headers:
I0507 19:01:23.649428    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:23.649428    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:23.654498    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:23.655678    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:23.655776    7688 round_trippers.go:469] Request Headers:
I0507 19:01:23.655776    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:23.655776    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:23.659848    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:24.154343    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:24.154426    7688 round_trippers.go:469] Request Headers:
I0507 19:01:24.154426    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:24.154426    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:24.162273    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:01:24.163180    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:24.163339    7688 round_trippers.go:469] Request Headers:
I0507 19:01:24.163339    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:24.163373    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:24.167398    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:24.656267    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:24.656368    7688 round_trippers.go:469] Request Headers:
I0507 19:01:24.656368    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:24.656368    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:24.664297    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:01:24.665215    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:24.665215    7688 round_trippers.go:469] Request Headers:
I0507 19:01:24.665215    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:24.665215    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:24.671846    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0507 19:01:25.156401    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:25.156604    7688 round_trippers.go:469] Request Headers:
I0507 19:01:25.156604    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:25.156604    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:25.166822    7688 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
I0507 19:01:25.167257    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:25.167257    7688 round_trippers.go:469] Request Headers:
I0507 19:01:25.167257    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:25.167257    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:25.170840    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:25.655841    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:25.655841    7688 round_trippers.go:469] Request Headers:
I0507 19:01:25.655841    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:25.655942    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:25.660869    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:25.661713    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:25.661713    7688 round_trippers.go:469] Request Headers:
I0507 19:01:25.661713    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:25.661713    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:25.666056    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:25.666889    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:26.154820    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:26.154882    7688 round_trippers.go:469] Request Headers:
I0507 19:01:26.154882    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:26.154882    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:26.159244    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:26.160914    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:26.160914    7688 round_trippers.go:469] Request Headers:
I0507 19:01:26.160914    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:26.160914    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:26.169838    7688 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0507 19:01:26.654674    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:26.654801    7688 round_trippers.go:469] Request Headers:
I0507 19:01:26.654801    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:26.654801    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:26.659722    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:26.660467    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:26.660467    7688 round_trippers.go:469] Request Headers:
I0507 19:01:26.660467    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:26.660467    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:26.663704    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:27.154573    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:27.154742    7688 round_trippers.go:469] Request Headers:
I0507 19:01:27.154742    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:27.154742    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:27.160230    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:27.161254    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:27.161337    7688 round_trippers.go:469] Request Headers:
I0507 19:01:27.161337    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:27.161337    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:27.165551    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:27.652556    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:27.652611    7688 round_trippers.go:469] Request Headers:
I0507 19:01:27.652679    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:27.652679    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:27.662982    7688 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
I0507 19:01:27.664736    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:27.664736    7688 round_trippers.go:469] Request Headers:
I0507 19:01:27.664736    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:27.664736    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:27.669943    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:27.670352    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:28.151149    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:28.151149    7688 round_trippers.go:469] Request Headers:
I0507 19:01:28.151149    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:28.151149    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:28.155879    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:28.157567    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:28.157567    7688 round_trippers.go:469] Request Headers:
I0507 19:01:28.157653    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:28.157653    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:28.162683    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:28.651858    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:28.651858    7688 round_trippers.go:469] Request Headers:
I0507 19:01:28.651858    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:28.651858    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:28.657104    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:28.657794    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:28.657794    7688 round_trippers.go:469] Request Headers:
I0507 19:01:28.657794    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:28.657941    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:28.666434    7688 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0507 19:01:29.150960    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:29.151031    7688 round_trippers.go:469] Request Headers:
I0507 19:01:29.151031    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:29.151031    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:29.155811    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:29.156671    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:29.156671    7688 round_trippers.go:469] Request Headers:
I0507 19:01:29.156671    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:29.156671    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:29.162599    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:29.652486    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:29.652486    7688 round_trippers.go:469] Request Headers:
I0507 19:01:29.652486    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:29.652579    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:29.657660    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:29.658318    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:29.658318    7688 round_trippers.go:469] Request Headers:
I0507 19:01:29.658318    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:29.658318    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:29.662357    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:30.151350    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:30.151635    7688 round_trippers.go:469] Request Headers:
I0507 19:01:30.151635    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:30.151635    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:30.160145    7688 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
I0507 19:01:30.161358    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:30.161410    7688 round_trippers.go:469] Request Headers:
I0507 19:01:30.161410    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:30.161410    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:30.165636    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:30.167258    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:30.655217    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:30.655217    7688 round_trippers.go:469] Request Headers:
I0507 19:01:30.655329    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:30.655329    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:30.662366    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:01:30.663525    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:30.663586    7688 round_trippers.go:469] Request Headers:
I0507 19:01:30.663586    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:30.663586    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:30.667704    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:31.157293    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:31.157293    7688 round_trippers.go:469] Request Headers:
I0507 19:01:31.157293    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:31.157293    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:31.161888    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:31.163643    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:31.163643    7688 round_trippers.go:469] Request Headers:
I0507 19:01:31.163643    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:31.163643    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:31.169378    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:31.659105    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:31.659105    7688 round_trippers.go:469] Request Headers:
I0507 19:01:31.659105    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:31.659105    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:31.666334    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:01:31.667371    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:31.667371    7688 round_trippers.go:469] Request Headers:
I0507 19:01:31.667371    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:31.667371    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:31.672310    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:32.160753    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:32.161020    7688 round_trippers.go:469] Request Headers:
I0507 19:01:32.161057    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:32.161057    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:32.168635    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0507 19:01:32.170228    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:32.170283    7688 round_trippers.go:469] Request Headers:
I0507 19:01:32.170283    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:32.170283    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:32.174645    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:32.176073    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:32.663526    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:32.663631    7688 round_trippers.go:469] Request Headers:
I0507 19:01:32.663631    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:32.663631    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:32.667960    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:32.669387    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:32.669387    7688 round_trippers.go:469] Request Headers:
I0507 19:01:32.669387    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:32.669387    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:32.686544    7688 round_trippers.go:574] Response Status: 200 OK in 17 milliseconds
I0507 19:01:33.149380    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:33.149571    7688 round_trippers.go:469] Request Headers:
I0507 19:01:33.149643    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:33.149643    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:33.155343    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:33.156733    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:33.156811    7688 round_trippers.go:469] Request Headers:
I0507 19:01:33.156811    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:33.156811    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:33.164746    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:01:33.650773    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:33.650838    7688 round_trippers.go:469] Request Headers:
I0507 19:01:33.650838    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:33.650838    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:33.655719    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:33.657258    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:33.657258    7688 round_trippers.go:469] Request Headers:
I0507 19:01:33.657258    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:33.657258    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:33.661826    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:34.152388    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:34.152388    7688 round_trippers.go:469] Request Headers:
I0507 19:01:34.152388    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:34.152454    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:34.158941    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0507 19:01:34.160389    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:34.160389    7688 round_trippers.go:469] Request Headers:
I0507 19:01:34.160389    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:34.160389    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:34.164101    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:34.651434    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:34.651434    7688 round_trippers.go:469] Request Headers:
I0507 19:01:34.651434    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:34.651434    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:34.655788    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:34.656635    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:34.656635    7688 round_trippers.go:469] Request Headers:
I0507 19:01:34.656635    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:34.656635    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:34.659892    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:34.660960    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:35.154159    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:35.154159    7688 round_trippers.go:469] Request Headers:
I0507 19:01:35.154159    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:35.154159    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:35.159696    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:35.161058    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:35.161155    7688 round_trippers.go:469] Request Headers:
I0507 19:01:35.161241    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:35.161241    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:35.165835    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:35.654843    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:35.654843    7688 round_trippers.go:469] Request Headers:
I0507 19:01:35.654843    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:35.654843    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:35.659621    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:35.661271    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:35.661271    7688 round_trippers.go:469] Request Headers:
I0507 19:01:35.661271    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:35.661271    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:35.665574    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:36.157038    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:36.157038    7688 round_trippers.go:469] Request Headers:
I0507 19:01:36.157038    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:36.157038    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:36.160652    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:36.162573    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:36.162573    7688 round_trippers.go:469] Request Headers:
I0507 19:01:36.162573    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:36.162573    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:36.166164    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:36.658534    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:36.658534    7688 round_trippers.go:469] Request Headers:
I0507 19:01:36.658534    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:36.658534    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:36.662280    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:36.664689    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:36.664774    7688 round_trippers.go:469] Request Headers:
I0507 19:01:36.664774    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:36.664849    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:36.672612    7688 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
I0507 19:01:36.674180    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:37.161570    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:37.161835    7688 round_trippers.go:469] Request Headers:
I0507 19:01:37.161835    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:37.161835    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:37.166422    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:37.168152    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:37.168310    7688 round_trippers.go:469] Request Headers:
I0507 19:01:37.168310    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:37.168310    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:37.173041    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:37.660635    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:37.660635    7688 round_trippers.go:469] Request Headers:
I0507 19:01:37.660635    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:37.660635    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:37.665221    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:37.667649    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:37.667649    7688 round_trippers.go:469] Request Headers:
I0507 19:01:37.667717    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:37.667717    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:37.673481    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:38.161829    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:38.161829    7688 round_trippers.go:469] Request Headers:
I0507 19:01:38.161829    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:38.161829    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:38.169050    7688 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
I0507 19:01:38.170130    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:38.170189    7688 round_trippers.go:469] Request Headers:
I0507 19:01:38.170189    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:38.170189    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:38.174871    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:38.657673    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:38.657750    7688 round_trippers.go:469] Request Headers:
I0507 19:01:38.657750    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:38.657750    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:38.663018    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:38.664381    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:38.664436    7688 round_trippers.go:469] Request Headers:
I0507 19:01:38.664436    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:38.664500    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:38.668775    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:39.156469    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:39.156469    7688 round_trippers.go:469] Request Headers:
I0507 19:01:39.156469    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:39.156469    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:39.161143    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:39.161876    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:39.161876    7688 round_trippers.go:469] Request Headers:
I0507 19:01:39.161876    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:39.161876    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:39.165438    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
I0507 19:01:39.166790    7688 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
I0507 19:01:39.654012    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:39.654012    7688 round_trippers.go:469] Request Headers:
I0507 19:01:39.654012    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:39.654012    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:39.659266    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:39.660477    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:39.660542    7688 round_trippers.go:469] Request Headers:
I0507 19:01:39.660542    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:39.660542    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:39.665354    7688 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
I0507 19:01:40.156165    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
I0507 19:01:40.156244    7688 round_trippers.go:469] Request Headers:
I0507 19:01:40.156244    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:40.156244    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:40.161422    7688 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
I0507 19:01:40.162878    7688 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
I0507 19:01:40.162984    7688 round_trippers.go:469] Request Headers:
I0507 19:01:40.162984    7688 round_trippers.go:473]     Accept: application/json, */*
I0507 19:01:40.162984    7688 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
I0507 19:01:40.166371    7688 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
ha_test.go:423: secondary control-plane node start returned an error. args "out/minikube-windows-amd64.exe -p ha-210800 node start m02 -v=7 --alsologtostderr": exit status 1
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr: context deadline exceeded (70.7µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr: context deadline exceeded (135µs)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:428: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr: context deadline exceeded (0s)
ha_test.go:432: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr" : context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-210800 -n ha-210800
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-210800 -n ha-210800: (10.8707007s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 logs -n 25: (7.7718458s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                            |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| ssh     | ha-210800 ssh -n                                                                                                          | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:53 UTC | 07 May 24 18:53 UTC |
	|         | ha-210800-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-210800 cp ha-210800-m03:/home/docker/cp-test.txt                                                                       | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:53 UTC | 07 May 24 18:53 UTC |
	|         | ha-210800:/home/docker/cp-test_ha-210800-m03_ha-210800.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-210800 ssh -n                                                                                                          | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:53 UTC | 07 May 24 18:53 UTC |
	|         | ha-210800-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-210800 ssh -n ha-210800 sudo cat                                                                                       | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:53 UTC | 07 May 24 18:54 UTC |
	|         | /home/docker/cp-test_ha-210800-m03_ha-210800.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-210800 cp ha-210800-m03:/home/docker/cp-test.txt                                                                       | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:54 UTC | 07 May 24 18:54 UTC |
	|         | ha-210800-m02:/home/docker/cp-test_ha-210800-m03_ha-210800-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-210800 ssh -n                                                                                                          | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:54 UTC | 07 May 24 18:54 UTC |
	|         | ha-210800-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-210800 ssh -n ha-210800-m02 sudo cat                                                                                   | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:54 UTC | 07 May 24 18:54 UTC |
	|         | /home/docker/cp-test_ha-210800-m03_ha-210800-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-210800 cp ha-210800-m03:/home/docker/cp-test.txt                                                                       | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:54 UTC | 07 May 24 18:54 UTC |
	|         | ha-210800-m04:/home/docker/cp-test_ha-210800-m03_ha-210800-m04.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-210800 ssh -n                                                                                                          | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:54 UTC | 07 May 24 18:54 UTC |
	|         | ha-210800-m03 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-210800 ssh -n ha-210800-m04 sudo cat                                                                                   | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:54 UTC | 07 May 24 18:55 UTC |
	|         | /home/docker/cp-test_ha-210800-m03_ha-210800-m04.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-210800 cp testdata\cp-test.txt                                                                                         | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:55 UTC | 07 May 24 18:55 UTC |
	|         | ha-210800-m04:/home/docker/cp-test.txt                                                                                    |           |                   |         |                     |                     |
	| ssh     | ha-210800 ssh -n                                                                                                          | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:55 UTC | 07 May 24 18:55 UTC |
	|         | ha-210800-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-210800 cp ha-210800-m04:/home/docker/cp-test.txt                                                                       | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:55 UTC | 07 May 24 18:55 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3684481978\001\cp-test_ha-210800-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-210800 ssh -n                                                                                                          | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:55 UTC | 07 May 24 18:55 UTC |
	|         | ha-210800-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| cp      | ha-210800 cp ha-210800-m04:/home/docker/cp-test.txt                                                                       | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:55 UTC | 07 May 24 18:55 UTC |
	|         | ha-210800:/home/docker/cp-test_ha-210800-m04_ha-210800.txt                                                                |           |                   |         |                     |                     |
	| ssh     | ha-210800 ssh -n                                                                                                          | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:55 UTC | 07 May 24 18:56 UTC |
	|         | ha-210800-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-210800 ssh -n ha-210800 sudo cat                                                                                       | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:56 UTC | 07 May 24 18:56 UTC |
	|         | /home/docker/cp-test_ha-210800-m04_ha-210800.txt                                                                          |           |                   |         |                     |                     |
	| cp      | ha-210800 cp ha-210800-m04:/home/docker/cp-test.txt                                                                       | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:56 UTC | 07 May 24 18:56 UTC |
	|         | ha-210800-m02:/home/docker/cp-test_ha-210800-m04_ha-210800-m02.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-210800 ssh -n                                                                                                          | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:56 UTC | 07 May 24 18:56 UTC |
	|         | ha-210800-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-210800 ssh -n ha-210800-m02 sudo cat                                                                                   | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:56 UTC | 07 May 24 18:56 UTC |
	|         | /home/docker/cp-test_ha-210800-m04_ha-210800-m02.txt                                                                      |           |                   |         |                     |                     |
	| cp      | ha-210800 cp ha-210800-m04:/home/docker/cp-test.txt                                                                       | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:56 UTC | 07 May 24 18:57 UTC |
	|         | ha-210800-m03:/home/docker/cp-test_ha-210800-m04_ha-210800-m03.txt                                                        |           |                   |         |                     |                     |
	| ssh     | ha-210800 ssh -n                                                                                                          | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:57 UTC | 07 May 24 18:57 UTC |
	|         | ha-210800-m04 sudo cat                                                                                                    |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                  |           |                   |         |                     |                     |
	| ssh     | ha-210800 ssh -n ha-210800-m03 sudo cat                                                                                   | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:57 UTC | 07 May 24 18:57 UTC |
	|         | /home/docker/cp-test_ha-210800-m04_ha-210800-m03.txt                                                                      |           |                   |         |                     |                     |
	| node    | ha-210800 node stop m02 -v=7                                                                                              | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:57 UTC | 07 May 24 18:57 UTC |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	| node    | ha-210800 node start m02 -v=7                                                                                             | ha-210800 | minikube5\jenkins | v1.33.0 | 07 May 24 18:58 UTC |                     |
	|         | --alsologtostderr                                                                                                         |           |                   |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/07 18:31:40
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0507 18:31:40.267319    8396 out.go:291] Setting OutFile to fd 792 ...
	I0507 18:31:40.268458    8396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 18:31:40.268458    8396 out.go:304] Setting ErrFile to fd 916...
	I0507 18:31:40.268458    8396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 18:31:40.286174    8396 out.go:298] Setting JSON to false
	I0507 18:31:40.293256    8396 start.go:129] hostinfo: {"hostname":"minikube5","uptime":22618,"bootTime":1715084081,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0507 18:31:40.293330    8396 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 18:31:40.319405    8396 out.go:177] * [ha-210800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0507 18:31:40.324555    8396 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:31:40.323436    8396 notify.go:220] Checking for updates...
	I0507 18:31:40.327534    8396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 18:31:40.329963    8396 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0507 18:31:40.332037    8396 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 18:31:40.341206    8396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 18:31:40.346242    8396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 18:31:45.132150    8396 out.go:177] * Using the hyperv driver based on user configuration
	I0507 18:31:45.134279    8396 start.go:297] selected driver: hyperv
	I0507 18:31:45.134331    8396 start.go:901] validating driver "hyperv" against <nil>
	I0507 18:31:45.134331    8396 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 18:31:45.180293    8396 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 18:31:45.181141    8396 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 18:31:45.181141    8396 cni.go:84] Creating CNI manager for ""
	I0507 18:31:45.181141    8396 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0507 18:31:45.181141    8396 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0507 18:31:45.181601    8396 start.go:340] cluster config:
{Name:ha-210800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 18:31:45.181601    8396 iso.go:125] acquiring lock: {Name:mk4977609d05da04fcecf95837b3381fb1950afd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 18:31:45.187134    8396 out.go:177] * Starting "ha-210800" primary control-plane node in "ha-210800" cluster
	I0507 18:31:45.189802    8396 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 18:31:45.190005    8396 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0507 18:31:45.190005    8396 cache.go:56] Caching tarball of preloaded images
	I0507 18:31:45.190302    8396 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0507 18:31:45.190455    8396 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 18:31:45.190996    8396 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:31:45.191191    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json: {Name:mkd92c4604bf507480a04d8ffc294646ec1e422b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:31:45.192083    8396 start.go:360] acquireMachinesLock for ha-210800: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 18:31:45.192083    8396 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-210800"
I0507 18:31:45.192083    8396 start.go:93] Provisioning new machine with config: &{Name:ha-210800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:31:45.192083    8396 start.go:125] createHost starting for "" (driver="hyperv")
	I0507 18:31:45.194277    8396 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 18:31:45.194277    8396 start.go:159] libmachine.API.Create for "ha-210800" (driver="hyperv")
	I0507 18:31:45.194277    8396 client.go:168] LocalClient.Create starting
	I0507 18:31:45.195275    8396 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0507 18:31:45.195275    8396 main.go:141] libmachine: Decoding PEM data...
	I0507 18:31:45.195275    8396 main.go:141] libmachine: Parsing certificate...
	I0507 18:31:45.195799    8396 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0507 18:31:45.195835    8396 main.go:141] libmachine: Decoding PEM data...
	I0507 18:31:45.195835    8396 main.go:141] libmachine: Parsing certificate...
	I0507 18:31:45.195835    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0507 18:31:46.990897    8396 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0507 18:31:46.990897    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:31:46.991848    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0507 18:31:48.515381    8396 main.go:141] libmachine: [stdout =====>] : False
	
	I0507 18:31:48.516214    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:31:48.516214    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 18:31:49.807093    8396 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 18:31:49.807093    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:31:49.807093    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 18:31:52.978461    8396 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 18:31:52.978461    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:31:52.981010    8396 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0507 18:31:53.310864    8396 main.go:141] libmachine: Creating SSH key...
	I0507 18:31:53.566648    8396 main.go:141] libmachine: Creating VM...
	I0507 18:31:53.566648    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 18:31:56.065672    8396 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 18:31:56.065672    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:31:56.066034    8396 main.go:141] libmachine: Using switch "Default Switch"
	I0507 18:31:56.066231    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 18:31:57.622894    8396 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 18:31:57.622894    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:31:57.622970    8396 main.go:141] libmachine: Creating VHD
	I0507 18:31:57.622970    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0507 18:32:01.075430    8396 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 34D3DCE6-8404-4989-9D3E-495162DF6FFE
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0507 18:32:01.075430    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:01.075820    8396 main.go:141] libmachine: Writing magic tar header
	I0507 18:32:01.075920    8396 main.go:141] libmachine: Writing SSH key tar header
	I0507 18:32:01.086634    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0507 18:32:04.117009    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:04.117009    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:04.117321    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\disk.vhd' -SizeBytes 20000MB
	I0507 18:32:06.500411    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:06.501049    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:06.501121    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-210800 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0507 18:32:09.767260    8396 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-210800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0507 18:32:09.767260    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:09.768258    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-210800 -DynamicMemoryEnabled $false
	I0507 18:32:11.836796    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:11.836796    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:11.837329    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-210800 -Count 2
	I0507 18:32:13.797109    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:13.797109    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:13.797610    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-210800 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\boot2docker.iso'
	I0507 18:32:16.071910    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:16.071910    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:16.071910    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-210800 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\disk.vhd'
	I0507 18:32:18.412166    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:18.412166    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:18.412166    8396 main.go:141] libmachine: Starting VM...
	I0507 18:32:18.412455    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-210800
	I0507 18:32:21.217165    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:21.217165    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:21.217165    8396 main.go:141] libmachine: Waiting for host to start...
	I0507 18:32:21.217872    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:23.247435    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:23.247435    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:23.248062    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:32:25.523052    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:25.523052    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:26.537349    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:28.501694    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:28.501694    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:28.501694    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:32:30.793866    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:30.793903    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:31.796667    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:33.782694    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:33.782694    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:33.782694    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:32:36.030848    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:36.031871    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:37.048032    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:38.998796    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:38.998796    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:38.999870    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:32:41.247235    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:32:41.247235    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:42.259808    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:44.263282    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:44.263282    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:44.263398    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:32:46.596942    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:32:46.596942    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:46.597430    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:48.496366    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:48.496428    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:48.496428    8396 machine.go:94] provisionDockerMachine start ...
	I0507 18:32:48.496428    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:50.432228    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:50.433158    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:50.433158    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:32:52.710668    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:32:52.710668    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:52.716204    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:32:52.728730    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.132.69 22 <nil> <nil>}
	I0507 18:32:52.728730    8396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 18:32:52.872442    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0507 18:32:52.872442    8396 buildroot.go:166] provisioning hostname "ha-210800"
	I0507 18:32:52.872442    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:54.725719    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:54.725795    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:54.725795    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:32:57.035116    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:32:57.035116    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:57.039171    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:32:57.039790    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.132.69 22 <nil> <nil>}
	I0507 18:32:57.039790    8396 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-210800 && echo "ha-210800" | sudo tee /etc/hostname
	I0507 18:32:57.212271    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-210800
	
	I0507 18:32:57.212507    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:32:59.089825    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:32:59.090473    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:32:59.090554    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:01.378519    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:01.378546    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:01.382441    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:33:01.383067    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.132.69 22 <nil> <nil>}
	I0507 18:33:01.383067    8396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-210800' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-210800/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-210800' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 18:33:01.536192    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 18:33:01.536275    8396 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0507 18:33:01.536275    8396 buildroot.go:174] setting up certificates
	I0507 18:33:01.536374    8396 provision.go:84] configureAuth start
	I0507 18:33:01.536494    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:03.485418    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:03.485418    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:03.486470    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:05.836813    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:05.836813    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:05.836920    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:07.770118    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:07.770118    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:07.770227    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:10.127424    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:10.127424    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:10.128242    8396 provision.go:143] copyHostCerts
	I0507 18:33:10.128437    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0507 18:33:10.128816    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0507 18:33:10.128888    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0507 18:33:10.129437    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0507 18:33:10.130813    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0507 18:33:10.131166    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0507 18:33:10.131166    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0507 18:33:10.131593    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0507 18:33:10.132747    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0507 18:33:10.133396    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0507 18:33:10.133396    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0507 18:33:10.133396    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
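The `copyHostCerts` sequence above follows a remove-then-copy idiom: if a stale copy already exists at the destination it is deleted ("found ..., removing ...") before the fresh cert is copied in. A minimal sketch of that idiom, using scratch temp directories in place of the real `.minikube` paths from the log:

```shell
# Sketch of the copyHostCerts idiom: for each cert, drop any stale
# destination copy, then copy the fresh one over.
# SRC/DST are placeholder temp dirs, not the real minikube paths.
set -e
SRC=$(mktemp -d)
DST=$(mktemp -d)
printf 'dummy-ca\n' > "$SRC/ca.pem"
for f in ca.pem; do
  if [ -e "$DST/$f" ]; then
    rm -f "$DST/$f"        # "found ..., removing ..."
  fi
  cp "$SRC/$f" "$DST/$f"   # "cp: ... --> ..."
done
cat "$DST/ca.pem"
```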
	I0507 18:33:10.134655    8396 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-210800 san=[127.0.0.1 172.19.132.69 ha-210800 localhost minikube]
	I0507 18:33:10.415997    8396 provision.go:177] copyRemoteCerts
	I0507 18:33:10.423371    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 18:33:10.423371    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:12.385601    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:12.385674    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:12.385745    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:14.679663    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:14.679663    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:14.679663    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:33:14.783974    8396 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3603061s)
	I0507 18:33:14.783974    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0507 18:33:14.783974    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0507 18:33:14.824071    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0507 18:33:14.824983    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1200 bytes)
	I0507 18:33:14.879080    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0507 18:33:14.879491    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0507 18:33:14.924186    8396 provision.go:87] duration metric: took 13.3868373s to configureAuth
	I0507 18:33:14.924280    8396 buildroot.go:189] setting minikube options for container-runtime
	I0507 18:33:14.924509    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:33:14.924509    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:16.861966    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:16.862036    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:16.862036    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:19.148757    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:19.148757    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:19.154957    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:33:19.155062    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.132.69 22 <nil> <nil>}
	I0507 18:33:19.155062    8396 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 18:33:19.292639    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 18:33:19.292639    8396 buildroot.go:70] root file system type: tmpfs
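The provisioner decides how to persist the docker unit based on the root filesystem type, probed with the one-liner shown above (`tmpfs` inside the buildroot VM). The same probe runs locally too, assuming GNU coreutils `df` with `--output` support:

```shell
# Print the filesystem type of / -- tmpfs inside the minikube buildroot VM,
# whatever the local root filesystem is elsewhere.
fstype=$(df --output=fstype / | tail -n 1)
echo "$fstype"
```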
	I0507 18:33:19.292639    8396 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 18:33:19.293173    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:21.137238    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:21.137238    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:21.137238    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:23.429102    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:23.429102    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:23.433478    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:33:23.434175    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.132.69 22 <nil> <nil>}
	I0507 18:33:23.434175    8396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 18:33:23.598916    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 18:33:23.598916    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:25.537242    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:25.537242    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:25.537513    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:27.814354    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:27.814354    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:27.818357    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:33:27.818520    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.132.69 22 <nil> <nil>}
	I0507 18:33:27.818520    8396 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 18:33:29.905831    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0507 18:33:29.905831    8396 machine.go:97] duration metric: took 41.4065907s to provisionDockerMachine
	I0507 18:33:29.905831    8396 client.go:171] duration metric: took 1m44.7044941s to LocalClient.Create
	I0507 18:33:29.905831    8396 start.go:167] duration metric: took 1m44.7044941s to libmachine.API.Create "ha-210800"
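The `diff ... || { mv ...; systemctl ...; }` command a few lines up is an idempotent install: write the candidate unit to `docker.service.new`, and only if it differs from the installed unit (or the installed unit is missing, as the `can't stat` output shows here) move it into place and reload/enable/restart. The same diff-or-replace pattern on plain files, with the sudo and systemctl steps elided:

```shell
# Install a config file only when it differs from (or is missing at) the
# target -- the pattern the log uses for /lib/systemd/system/docker.service.
# Paths here are temp placeholders.
set -e
dir=$(mktemp -d)
target="$dir/docker.service"
printf '[Unit]\nDescription=demo\n' > "$target.new"
diff -u "$target" "$target.new" 2>/dev/null || {
  mv "$target.new" "$target"
  # real flow: sudo systemctl -f daemon-reload && enable && restart docker
}
```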
	I0507 18:33:29.905831    8396 start.go:293] postStartSetup for "ha-210800" (driver="hyperv")
	I0507 18:33:29.906450    8396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 18:33:29.916237    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 18:33:29.916237    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:31.820974    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:31.820974    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:31.822033    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:34.093386    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:34.093386    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:34.093386    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:33:34.198272    8396 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2817432s)
	I0507 18:33:34.209356    8396 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 18:33:34.216100    8396 info.go:137] Remote host: Buildroot 2023.02.9
	I0507 18:33:34.216100    8396 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0507 18:33:34.216100    8396 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0507 18:33:34.217092    8396 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> 99922.pem in /etc/ssl/certs
	I0507 18:33:34.217179    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /etc/ssl/certs/99922.pem
	I0507 18:33:34.226054    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0507 18:33:34.242554    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /etc/ssl/certs/99922.pem (1708 bytes)
	I0507 18:33:34.283920    8396 start.go:296] duration metric: took 4.3771294s for postStartSetup
	I0507 18:33:34.287294    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:36.226853    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:36.226853    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:36.227084    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:38.586722    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:38.586722    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:38.586722    8396 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:33:38.590493    8396 start.go:128] duration metric: took 1m53.390759s to createHost
	I0507 18:33:38.590493    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:40.538496    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:40.538496    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:40.538562    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:42.879129    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:42.879129    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:42.883780    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:33:42.884386    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.132.69 22 <nil> <nil>}
	I0507 18:33:42.884386    8396 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0507 18:33:43.023569    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715106823.242283585
	
	I0507 18:33:43.023662    8396 fix.go:216] guest clock: 1715106823.242283585
	I0507 18:33:43.023662    8396 fix.go:229] Guest: 2024-05-07 18:33:43.242283585 +0000 UTC Remote: 2024-05-07 18:33:38.5904938 +0000 UTC m=+118.435968701 (delta=4.651789785s)
	I0507 18:33:43.023662    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:44.945633    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:44.945633    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:44.946422    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:47.234846    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:47.234846    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:47.238995    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:33:47.239318    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.132.69 22 <nil> <nil>}
	I0507 18:33:47.239393    8396 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715106823
	I0507 18:33:47.386230    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May  7 18:33:43 UTC 2024
	
	I0507 18:33:47.386230    8396 fix.go:236] clock set: Tue May  7 18:33:43 UTC 2024
	 (err=<nil>)
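The fix above reads the guest clock over SSH, compares it against the host-side timestamp, and because the delta (about 4.65s) is too large, resets the guest with `sudo date -s @<epoch>`. The skew computation in integer epoch seconds; 1715106823 is the guest epoch from the log, and the host value below is a hypothetical stand-in:

```shell
# Guest vs. host clock skew check, as integer epoch seconds.
# guest is the epoch from the log; host is a hypothetical reading.
guest=1715106823
host=1715106818
delta=$((guest - host))
echo "delta=${delta}s"
# real flow, when the skew exceeds tolerance:
#   sudo date -s @"$guest"
```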
	I0507 18:33:47.386230    8396 start.go:83] releasing machines lock for "ha-210800", held for 2m2.1858946s
	I0507 18:33:47.386770    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:49.320393    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:49.320393    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:49.321325    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:51.655612    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:51.655612    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:51.659402    8396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 18:33:51.659488    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:51.666467    8396 ssh_runner.go:195] Run: cat /version.json
	I0507 18:33:51.666467    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:33:53.608511    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:53.608511    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:53.608511    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:53.627212    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:33:53.627212    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:53.628025    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:33:55.978727    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:55.978931    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:55.978931    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:33:56.005146    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:33:56.005146    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:33:56.005771    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:33:56.087130    8396 ssh_runner.go:235] Completed: cat /version.json: (4.4203604s)
	I0507 18:33:56.095373    8396 ssh_runner.go:195] Run: systemctl --version
	I0507 18:33:56.156007    8396 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.4962973s)
	I0507 18:33:56.166277    8396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0507 18:33:56.175322    8396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 18:33:56.184724    8396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0507 18:33:56.212937    8396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
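The bridge CNI configs above are disabled with a single `find -exec` that renames every bridge/podman conflist to `*.mk_disabled`, skipping files already renamed. The same rename pass against a scratch directory (GNU find assumed, as in the log):

```shell
# Rename bridge/podman CNI configs out of the way by appending .mk_disabled,
# mirroring the log's `find /etc/cni/net.d ... -exec mv {} {}.mk_disabled`.
set -e
netd=$(mktemp -d)
touch "$netd/87-podman-bridge.conflist" "$netd/10-kindnet.conflist"
find "$netd" -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls "$netd"
```

Only the podman-bridge config matches the pattern; the kindnet file is left alone, which is why the log reports exactly one disabled config.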
	I0507 18:33:56.212937    8396 start.go:494] detecting cgroup driver to use...
	I0507 18:33:56.212937    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 18:33:56.263768    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0507 18:33:56.293164    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 18:33:56.312005    8396 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 18:33:56.323112    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 18:33:56.352226    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 18:33:56.383447    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 18:33:56.410638    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 18:33:56.438750    8396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 18:33:56.465337    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 18:33:56.497084    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 18:33:56.524064    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
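The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place: pin the sandbox image, force `SystemdCgroup = false` for the cgroupfs driver, migrate runtime names to `io.containerd.runc.v2`, and so on. One of those edits applied to a scratch copy of the file (GNU sed assumed, as on the VM):

```shell
# Apply the SystemdCgroup edit from the log to a scratch config.toml,
# preserving the original indentation via the \1 capture group.
set -e
cfg=$(mktemp)
printf '  SystemdCgroup = true\n' > "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```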
	I0507 18:33:56.555335    8396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 18:33:56.579709    8396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 18:33:56.603700    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:33:56.803752    8396 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0507 18:33:56.830461    8396 start.go:494] detecting cgroup driver to use...
	I0507 18:33:56.841791    8396 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 18:33:56.872976    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 18:33:56.902669    8396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 18:33:56.946818    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 18:33:56.982105    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 18:33:57.014666    8396 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0507 18:33:57.076148    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 18:33:57.099890    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 18:33:57.140585    8396 ssh_runner.go:195] Run: which cri-dockerd
	I0507 18:33:57.155485    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 18:33:57.172359    8396 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 18:33:57.210978    8396 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 18:33:57.402887    8396 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 18:33:57.568918    8396 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 18:33:57.569264    8396 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0507 18:33:57.608281    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:33:57.786235    8396 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 18:34:00.287567    8396 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5011606s)
	I0507 18:34:00.302671    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 18:34:00.341568    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 18:34:00.376727    8396 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 18:34:00.559799    8396 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 18:34:00.741447    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:34:00.924723    8396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 18:34:00.964793    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 18:34:00.998199    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:34:01.178832    8396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 18:34:01.280841    8396 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 18:34:01.291060    8396 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 18:34:01.298536    8396 start.go:562] Will wait 60s for crictl version
	I0507 18:34:01.309109    8396 ssh_runner.go:195] Run: which crictl
	I0507 18:34:01.324588    8396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 18:34:01.382841    8396 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0507 18:34:01.390260    8396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 18:34:01.427836    8396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 18:34:01.458548    8396 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0507 18:34:01.458548    8396 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0507 18:34:01.465200    8396 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0507 18:34:01.465200    8396 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0507 18:34:01.465200    8396 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0507 18:34:01.465200    8396 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a3:a5:4f Flags:up|broadcast|multicast|running}
	I0507 18:34:01.467565    8396 ip.go:210] interface addr: fe80::1edb:f5fd:c218:d8d2/64
	I0507 18:34:01.467565    8396 ip.go:210] interface addr: 172.19.128.1/20
	I0507 18:34:01.475594    8396 ssh_runner.go:195] Run: grep 172.19.128.1	host.minikube.internal$ /etc/hosts
	I0507 18:34:01.481961    8396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
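The `/etc/hosts` update above is another idempotent rewrite: filter out any existing `host.minikube.internal` line, append the current mapping, and copy the result back over `/etc/hosts`. The same filter-append-replace against a scratch hosts file, with the sudo step elided:

```shell
# Ensure exactly one host.minikube.internal mapping, the way the log edits
# /etc/hosts: strip any old entry, append the fresh one, copy the result back.
# The scratch file stands in for /etc/hosts.
set -e
hosts=$(mktemp)
tab=$(printf '\t')
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
{ grep -v "${tab}host\.minikube\.internal\$" "$hosts"
  printf '172.19.128.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```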
	I0507 18:34:01.514485    8396 kubeadm.go:877] updating cluster {Name:ha-210800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0
ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.132.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0507 18:34:01.514485    8396 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 18:34:01.521486    8396 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 18:34:01.547082    8396 docker.go:685] Got preloaded images: 
	I0507 18:34:01.547156    8396 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0507 18:34:01.559292    8396 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0507 18:34:01.590071    8396 ssh_runner.go:195] Run: which lz4
	I0507 18:34:01.596074    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0507 18:34:01.604654    8396 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0507 18:34:01.610591    8396 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0507 18:34:01.610591    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0507 18:34:03.028291    8396 docker.go:649] duration metric: took 1.43166s to copy over tarball
	I0507 18:34:03.039375    8396 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0507 18:34:12.483258    8396 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.4430391s)
	I0507 18:34:12.483258    8396 ssh_runner.go:146] rm: /preloaded.tar.lz4
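	The run above is minikube's copy-if-missing pattern for the preload tarball: `stat` the remote path, scp the cached tarball only when that fails, extract it, then remove it. A minimal local sketch of that check, using scratch stand-in paths rather than the real `/preloaded.tar.lz4` and a plain `cp` in place of the SSH transfer:

```shell
# Sketch of the existence-check-then-copy step logged above; the paths are
# temporary stand-ins, and the real transfer is an scp over SSH.
SRC=$(mktemp)                       # stands in for the cached preload tarball
DST_DIR=$(mktemp -d)
DST="$DST_DIR/preloaded.tar.lz4"

if ! stat "$DST" >/dev/null 2>&1; then   # same check as ssh_runner's stat
  cp "$SRC" "$DST"                       # minikube: scp cache --> /preloaded.tar.lz4
fi
COPIED=$( [ -f "$DST" ] && echo yes || echo no )
rm -rf "$SRC" "$DST_DIR"                 # the tarball is removed after extraction
```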
	I0507 18:34:12.542216    8396 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0507 18:34:12.559502    8396 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0507 18:34:12.603692    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:34:12.787347    8396 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 18:34:16.129136    8396 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3415599s)
	I0507 18:34:16.137182    8396 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 18:34:16.159345    8396 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0507 18:34:16.159409    8396 cache_images.go:84] Images are preloaded, skipping loading
	I0507 18:34:16.159409    8396 kubeadm.go:928] updating node { 172.19.132.69 8443 v1.30.0 docker true true} ...
	I0507 18:34:16.159658    8396 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-210800 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.132.69
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0507 18:34:16.166615    8396 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0507 18:34:16.198247    8396 cni.go:84] Creating CNI manager for ""
	I0507 18:34:16.198247    8396 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0507 18:34:16.198247    8396 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0507 18:34:16.198247    8396 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.132.69 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-210800 NodeName:ha-210800 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.132.69"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.132.69 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0507 18:34:16.198247    8396 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.132.69
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-210800"
	  kubeletExtraArgs:
	    node-ip: 172.19.132.69
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.132.69"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
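	The kubeadm config printed above is a single multi-document YAML carrying four kinds (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A quick sanity check of which kinds such a generated file contains, sketched against a trimmed heredoc stand-in rather than the real `/var/tmp/minikube/kubeadm.yaml`:

```shell
# List the `kind:` of each document in a multi-doc kubeadm YAML.
# The heredoc is a trimmed stand-in for the config dumped above.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
EOF
KINDS=$(awk '/^kind:/ {print $2}' "$CFG")
echo "$KINDS"
rm -f "$CFG"
```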
	
	I0507 18:34:16.198247    8396 kube-vip.go:111] generating kube-vip config ...
	I0507 18:34:16.207240    8396 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0507 18:34:16.230277    8396 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0507 18:34:16.231265    8396 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.143.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
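	The kube-vip static-pod manifest above carries all of its settings as environment variables; the `address` entry is the control-plane VIP (172.19.143.254, matching APIServerHAVIP). One way to pull that value back out of a manifest file, sketched with a heredoc stand-in for `/etc/kubernetes/manifests/kube-vip.yaml`:

```shell
# Read the VIP from a kube-vip manifest by locating the `address` env entry
# and printing the value on the following line.
MANIFEST=$(mktemp)
cat > "$MANIFEST" <<'EOF'
    env:
    - name: address
      value: 172.19.143.254
    - name: port
      value: "8443"
EOF
VIP=$(awk '/name: address/ {getline; print $2}' "$MANIFEST")
rm -f "$MANIFEST"
```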
	I0507 18:34:16.242254    8396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0507 18:34:16.264257    8396 binaries.go:44] Found k8s binaries, skipping transfer
	I0507 18:34:16.274246    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0507 18:34:16.295434    8396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0507 18:34:16.324449    8396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 18:34:16.359021    8396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2154 bytes)
	I0507 18:34:16.387727    8396 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1352 bytes)
	I0507 18:34:16.426454    8396 ssh_runner.go:195] Run: grep 172.19.143.254	control-plane.minikube.internal$ /etc/hosts
	I0507 18:34:16.432961    8396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
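	The `/bin/bash -c` one-liner above keeps `/etc/hosts` idempotent: it filters out any existing line ending in the tab-separated hostname, appends a fresh entry, and copies the result back into place. The same pattern applied to a scratch file (no sudo; the file path is a stand-in for `/etc/hosts`):

```shell
# Idempotent host-entry update, mirroring the { grep -v ...; echo ...; }
# trick logged above, but applied to a temp file instead of /etc/hosts.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n' > "$HOSTS"

add_host() {  # add_host IP NAME: drop any old NAME line, append a fresh one
  { grep -v $'\t'"$2"'$' "$HOSTS"; printf '%s\t%s\n' "$1" "$2"; } > "$HOSTS.new"
  mv "$HOSTS.new" "$HOSTS"
}

add_host 172.19.143.254 control-plane.minikube.internal
add_host 172.19.143.254 control-plane.minikube.internal  # rerun stays a no-op
COUNT=$(grep -c 'control-plane\.minikube\.internal' "$HOSTS")
rm -f "$HOSTS"
```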
	I0507 18:34:16.459827    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:34:16.629800    8396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 18:34:16.654738    8396 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800 for IP: 172.19.132.69
	I0507 18:34:16.654932    8396 certs.go:194] generating shared ca certs ...
	I0507 18:34:16.654932    8396 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:16.655753    8396 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0507 18:34:16.656180    8396 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0507 18:34:16.656382    8396 certs.go:256] generating profile certs ...
	I0507 18:34:16.657119    8396 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.key
	I0507 18:34:16.657238    8396 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.crt with IP's: []
	I0507 18:34:16.732052    8396 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.crt ...
	I0507 18:34:16.732052    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.crt: {Name:mk59fbe227eecdee4ffc9752f8af7db1e6cae876 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:16.733685    8396 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.key ...
	I0507 18:34:16.733685    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.key: {Name:mkc8e35621f7e8f0fa74ff63f98b71222545a7b1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:16.735467    8396 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.6c1d5e03
	I0507 18:34:16.735467    8396 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.6c1d5e03 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.132.69 172.19.143.254]
	I0507 18:34:16.992191    8396 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.6c1d5e03 ...
	I0507 18:34:16.992191    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.6c1d5e03: {Name:mke632d2d15fa0eedb6c0c6aa4eefca3f13e4bd2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:16.994139    8396 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.6c1d5e03 ...
	I0507 18:34:16.994139    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.6c1d5e03: {Name:mk023deb57a6234e869043d6d13dae2827f4a2e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:16.994576    8396 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.6c1d5e03 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt
	I0507 18:34:17.007850    8396 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.6c1d5e03 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key
	I0507 18:34:17.008722    8396 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key
	I0507 18:34:17.008722    8396 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt with IP's: []
	I0507 18:34:17.383476    8396 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt ...
	I0507 18:34:17.383476    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt: {Name:mk1a84aa147a934c266b8199690fcdbca720b9f3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:17.385483    8396 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key ...
	I0507 18:34:17.385483    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key: {Name:mkbc78be0b182612ff8178f9381e616ab597e2c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:17.386494    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0507 18:34:17.387486    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0507 18:34:17.387486    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0507 18:34:17.387486    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0507 18:34:17.387486    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0507 18:34:17.387486    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0507 18:34:17.387486    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0507 18:34:17.396480    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0507 18:34:17.397138    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem (1338 bytes)
	W0507 18:34:17.397534    8396 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992_empty.pem, impossibly tiny 0 bytes
	I0507 18:34:17.397534    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0507 18:34:17.397860    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0507 18:34:17.398083    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0507 18:34:17.398083    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0507 18:34:17.398500    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem (1708 bytes)
	I0507 18:34:17.398747    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /usr/share/ca-certificates/99922.pem
	I0507 18:34:17.398889    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:34:17.398889    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem -> /usr/share/ca-certificates/9992.pem
	I0507 18:34:17.399510    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 18:34:17.447078    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 18:34:17.485076    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 18:34:17.522069    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0507 18:34:17.568219    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0507 18:34:17.613489    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0507 18:34:17.661087    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0507 18:34:17.700309    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0507 18:34:17.742476    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /usr/share/ca-certificates/99922.pem (1708 bytes)
	I0507 18:34:17.787798    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 18:34:17.830594    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem --> /usr/share/ca-certificates/9992.pem (1338 bytes)
	I0507 18:34:17.870851    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0507 18:34:17.911076    8396 ssh_runner.go:195] Run: openssl version
	I0507 18:34:17.931697    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99922.pem && ln -fs /usr/share/ca-certificates/99922.pem /etc/ssl/certs/99922.pem"
	I0507 18:34:17.961083    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99922.pem
	I0507 18:34:17.968714    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 18:34:17.981379    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99922.pem
	I0507 18:34:18.001710    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99922.pem /etc/ssl/certs/3ec20f2e.0"
	I0507 18:34:18.030922    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 18:34:18.061028    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:34:18.071443    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:34:18.083945    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:34:18.098901    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0507 18:34:18.125427    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9992.pem && ln -fs /usr/share/ca-certificates/9992.pem /etc/ssl/certs/9992.pem"
	I0507 18:34:18.152762    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9992.pem
	I0507 18:34:18.159930    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 18:34:18.168357    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9992.pem
	I0507 18:34:18.185917    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9992.pem /etc/ssl/certs/51391683.0"
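	The `openssl x509 -hash` plus `ln -fs` pairs above reproduce what c_rehash/update-ca-certificates does: each CA file is linked as `<subject-hash>.0` so OpenSSL can locate it by hash lookup in `/etc/ssl/certs`. A local sketch with a throwaway self-signed CA (assumes `openssl` is on PATH; the directory and CN are scratch stand-ins):

```shell
# Generate a throwaway CA, compute its OpenSSL subject hash, and create the
# <hash>.0 symlink, as the logged `openssl x509 -hash` + `ln -fs` pair does.
CERTDIR=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=minikubeCA" \
  -keyout "$CERTDIR/ca.key" -out "$CERTDIR/minikubeCA.pem" 2>/dev/null
HASH=$(openssl x509 -hash -noout -in "$CERTDIR/minikubeCA.pem")
ln -fs "$CERTDIR/minikubeCA.pem" "$CERTDIR/$HASH.0"
```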
	I0507 18:34:18.210269    8396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 18:34:18.216286    8396 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0507 18:34:18.216938    8396 kubeadm.go:391] StartCluster: {Name:ha-210800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.132.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 18:34:18.223970    8396 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0507 18:34:18.257737    8396 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0507 18:34:18.285132    8396 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0507 18:34:18.308705    8396 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0507 18:34:18.324760    8396 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0507 18:34:18.324760    8396 kubeadm.go:156] found existing configuration files:
	
	I0507 18:34:18.334009    8396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0507 18:34:18.349398    8396 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0507 18:34:18.359781    8396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0507 18:34:18.385849    8396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0507 18:34:18.401809    8396 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0507 18:34:18.409495    8396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0507 18:34:18.437518    8396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0507 18:34:18.452800    8396 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0507 18:34:18.461825    8396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0507 18:34:18.487952    8396 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0507 18:34:18.502201    8396 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0507 18:34:18.511217    8396 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0507 18:34:18.527704    8396 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0507 18:34:18.853530    8396 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0507 18:34:31.670158    8396 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0507 18:34:31.670158    8396 kubeadm.go:309] [preflight] Running pre-flight checks
	I0507 18:34:31.670158    8396 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0507 18:34:31.671615    8396 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0507 18:34:31.671858    8396 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0507 18:34:31.671858    8396 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0507 18:34:31.674826    8396 out.go:204]   - Generating certificates and keys ...
	I0507 18:34:31.674973    8396 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0507 18:34:31.675121    8396 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0507 18:34:31.675512    8396 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0507 18:34:31.675512    8396 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0507 18:34:31.675512    8396 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0507 18:34:31.675512    8396 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0507 18:34:31.676042    8396 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0507 18:34:31.676359    8396 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-210800 localhost] and IPs [172.19.132.69 127.0.0.1 ::1]
	I0507 18:34:31.676496    8396 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0507 18:34:31.676544    8396 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-210800 localhost] and IPs [172.19.132.69 127.0.0.1 ::1]
	I0507 18:34:31.676544    8396 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0507 18:34:31.676544    8396 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0507 18:34:31.677078    8396 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0507 18:34:31.677172    8396 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0507 18:34:31.677505    8396 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0507 18:34:31.677505    8396 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0507 18:34:31.677505    8396 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0507 18:34:31.677505    8396 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0507 18:34:31.678122    8396 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0507 18:34:31.678122    8396 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0507 18:34:31.678122    8396 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0507 18:34:31.680943    8396 out.go:204]   - Booting up control plane ...
	I0507 18:34:31.681153    8396 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0507 18:34:31.681313    8396 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0507 18:34:31.681510    8396 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0507 18:34:31.681687    8396 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0507 18:34:31.681687    8396 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0507 18:34:31.681687    8396 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0507 18:34:31.682109    8396 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0507 18:34:31.682109    8396 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0507 18:34:31.682650    8396 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002741282s
	I0507 18:34:31.682909    8396 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0507 18:34:31.683166    8396 kubeadm.go:309] [api-check] The API server is healthy after 7.003581288s
	I0507 18:34:31.683468    8396 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0507 18:34:31.683756    8396 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0507 18:34:31.683930    8396 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0507 18:34:31.684258    8396 kubeadm.go:309] [mark-control-plane] Marking the node ha-210800 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0507 18:34:31.684313    8396 kubeadm.go:309] [bootstrap-token] Using token: wq75wp.g5obqxh3w2h2uzc4
	I0507 18:34:31.687258    8396 out.go:204]   - Configuring RBAC rules ...
	I0507 18:34:31.687324    8396 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0507 18:34:31.687324    8396 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0507 18:34:31.687959    8396 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0507 18:34:31.687959    8396 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0507 18:34:31.687959    8396 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0507 18:34:31.688606    8396 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0507 18:34:31.688606    8396 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0507 18:34:31.688606    8396 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0507 18:34:31.689191    8396 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0507 18:34:31.689191    8396 kubeadm.go:309] 
	I0507 18:34:31.689191    8396 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0507 18:34:31.689191    8396 kubeadm.go:309] 
	I0507 18:34:31.689191    8396 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0507 18:34:31.689191    8396 kubeadm.go:309] 
	I0507 18:34:31.689191    8396 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0507 18:34:31.689191    8396 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0507 18:34:31.689191    8396 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0507 18:34:31.689191    8396 kubeadm.go:309] 
	I0507 18:34:31.689191    8396 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0507 18:34:31.689191    8396 kubeadm.go:309] 
	I0507 18:34:31.689191    8396 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0507 18:34:31.689191    8396 kubeadm.go:309] 
	I0507 18:34:31.689191    8396 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0507 18:34:31.690193    8396 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0507 18:34:31.690193    8396 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0507 18:34:31.690193    8396 kubeadm.go:309] 
	I0507 18:34:31.690193    8396 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0507 18:34:31.690193    8396 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0507 18:34:31.690193    8396 kubeadm.go:309] 
	I0507 18:34:31.690193    8396 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token wq75wp.g5obqxh3w2h2uzc4 \
	I0507 18:34:31.690193    8396 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 \
	I0507 18:34:31.691217    8396 kubeadm.go:309] 	--control-plane 
	I0507 18:34:31.691217    8396 kubeadm.go:309] 
	I0507 18:34:31.691217    8396 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0507 18:34:31.691217    8396 kubeadm.go:309] 
	I0507 18:34:31.691217    8396 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token wq75wp.g5obqxh3w2h2uzc4 \
	I0507 18:34:31.691795    8396 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 
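The `--discovery-token-ca-cert-hash` value printed above is the SHA-256 digest of the DER-encoded CA public key, and it can be recomputed independently to verify a join command. A minimal sketch, assuming `openssl` is available; it self-signs a throwaway CA instead of reading the cluster's real `/etc/kubernetes/pki/ca.crt`, so the hash value itself is illustrative:

```shell
# Recompute a kubeadm-style discovery-token CA cert hash
# (sha256 over the DER encoding of the CA public key).
# A real cluster would use /etc/kubernetes/pki/ca.crt;
# here we generate a throwaway self-signed CA instead.
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$tmp/ca.key" -out "$tmp/ca.crt" \
  -subj "/CN=minikubeCA" -days 1 2>/dev/null
hash=$(openssl x509 -pubkey -in "$tmp/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //')
echo "sha256:$hash"
rm -rf "$tmp"
```

Against the real cluster CA, the same pipeline reproduces the `sha256:931f752c…` value shown in the join commands above.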
	I0507 18:34:31.691907    8396 cni.go:84] Creating CNI manager for ""
	I0507 18:34:31.691907    8396 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0507 18:34:31.694919    8396 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0507 18:34:31.704618    8396 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0507 18:34:31.712587    8396 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0507 18:34:31.712587    8396 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0507 18:34:31.760956    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0507 18:34:32.223224    8396 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0507 18:34:32.235001    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-210800 minikube.k8s.io/updated_at=2024_05_07T18_34_32_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f minikube.k8s.io/name=ha-210800 minikube.k8s.io/primary=true
	I0507 18:34:32.235550    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:32.249883    8396 ops.go:34] apiserver oom_adj: -16
	I0507 18:34:32.443977    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:32.953029    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:33.455184    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:33.954748    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:34.458483    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:34.960374    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:35.456448    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:35.949035    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:36.445521    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:36.953207    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:37.454167    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:37.957439    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:38.462131    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:38.959140    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:39.457870    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:39.944282    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:40.448077    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:40.944537    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:41.446618    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:41.951533    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:42.456184    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:42.958353    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:43.444040    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:43.946593    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:44.450533    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 18:34:44.626392    8396 kubeadm.go:1107] duration metric: took 12.4022566s to wait for elevateKubeSystemPrivileges
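The run of `kubectl get sa default` calls above is a poll-until-ready loop: minikube retries roughly every 500ms until the `default` ServiceAccount exists, then records the total wait as a duration metric. A minimal bash sketch of that pattern (the `wait_for` helper name is ours, not minikube's):

```shell
# Poll a command until it succeeds or a timeout (in seconds) elapses,
# mirroring the ~500ms retry cadence visible in the log above.
# Uses bash's SECONDS counter, so this assumes bash.
wait_for() {
  local timeout=$1; shift
  local deadline=$((SECONDS + timeout))
  until "$@"; do
    if (( SECONDS >= deadline )); then
      return 1
    fi
    sleep 0.5
  done
}

# Example (hypothetical paths, matching the log):
# wait_for 30 kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
```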
	W0507 18:34:44.626544    8396 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0507 18:34:44.626544    8396 kubeadm.go:393] duration metric: took 26.4077927s to StartCluster
	I0507 18:34:44.626544    8396 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:44.626904    8396 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:34:44.627734    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:34:44.629650    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0507 18:34:44.629650    8396 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.19.132.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:34:44.629650    8396 start.go:240] waiting for startup goroutines ...
	I0507 18:34:44.629650    8396 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0507 18:34:44.629650    8396 addons.go:69] Setting storage-provisioner=true in profile "ha-210800"
	I0507 18:34:44.629650    8396 addons.go:69] Setting default-storageclass=true in profile "ha-210800"
	I0507 18:34:44.629650    8396 addons.go:234] Setting addon storage-provisioner=true in "ha-210800"
	I0507 18:34:44.629650    8396 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-210800"
	I0507 18:34:44.630261    8396 host.go:66] Checking if "ha-210800" exists ...
	I0507 18:34:44.630261    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:34:44.631155    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:34:44.631802    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:34:44.790548    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0507 18:34:45.118544    8396 start.go:946] {"host.minikube.internal": 172.19.128.1} host record injected into CoreDNS's ConfigMap
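The `host.minikube.internal` injection above is a plain text edit of the CoreDNS Corefile: a `hosts` block is inserted before the `forward . /etc/resolv.conf` plugin and a `log` directive before `errors`. A sketch using the same sed expressions from the log, run against a local sample Corefile instead of the live ConfigMap (sample contents and path are ours):

```shell
# Apply minikube's CoreDNS edit (sed expressions copied from the log above)
# to a sample Corefile rather than the live kube-system ConfigMap.
cat > /tmp/Corefile.sample <<'EOF'
.:53 {
        errors
        health
        forward . /etc/resolv.conf
        cache 30
}
EOF
patched=$(sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.128.1 host.minikube.internal\n           fallthrough\n        }' \
              -e '/^        errors *$/i \        log' /tmp/Corefile.sample)
echo "$patched"
```

Note the `\n` escapes inside the `i\` text are a GNU sed extension, which holds inside the minikube VM.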
	I0507 18:34:46.723769    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:34:46.724090    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:46.725054    8396 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:34:46.725302    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:34:46.725302    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:46.728032    8396 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 18:34:46.725479    8396 kapi.go:59] client config for ha-210800: &rest.Config{Host:"https://172.19.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-210800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-210800\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 18:34:46.729454    8396 cert_rotation.go:137] Starting client certificate rotation controller
	I0507 18:34:46.729870    8396 addons.go:234] Setting addon default-storageclass=true in "ha-210800"
	I0507 18:34:46.730386    8396 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 18:34:46.730386    8396 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0507 18:34:46.730386    8396 host.go:66] Checking if "ha-210800" exists ...
	I0507 18:34:46.730386    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:34:46.731457    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:34:48.808619    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:34:48.808619    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:48.808619    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:34:48.873739    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:34:48.874544    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:48.874602    8396 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0507 18:34:48.874602    8396 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0507 18:34:48.874730    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:34:50.939859    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:34:50.939859    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:50.939859    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:34:51.324481    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:34:51.324481    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:51.324962    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:34:51.489552    8396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 18:34:53.300330    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:34:53.301103    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:53.301393    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:34:53.448203    8396 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0507 18:34:53.588179    8396 round_trippers.go:463] GET https://172.19.143.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0507 18:34:53.588179    8396 round_trippers.go:469] Request Headers:
	I0507 18:34:53.588179    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:34:53.588293    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:34:53.599611    8396 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0507 18:34:53.601332    8396 round_trippers.go:463] PUT https://172.19.143.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0507 18:34:53.601390    8396 round_trippers.go:469] Request Headers:
	I0507 18:34:53.601390    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:34:53.601390    8396 round_trippers.go:473]     Content-Type: application/json
	I0507 18:34:53.601390    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:34:53.609600    8396 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 18:34:53.611561    8396 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0507 18:34:53.616552    8396 addons.go:505] duration metric: took 8.9862837s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0507 18:34:53.616552    8396 start.go:245] waiting for cluster config update ...
	I0507 18:34:53.616552    8396 start.go:254] writing updated cluster config ...
	I0507 18:34:53.618565    8396 out.go:177] 
	I0507 18:34:53.628560    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:34:53.629550    8396 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:34:53.633562    8396 out.go:177] * Starting "ha-210800-m02" control-plane node in "ha-210800" cluster
	I0507 18:34:53.637573    8396 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 18:34:53.638009    8396 cache.go:56] Caching tarball of preloaded images
	I0507 18:34:53.638009    8396 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0507 18:34:53.638009    8396 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 18:34:53.638009    8396 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:34:53.642353    8396 start.go:360] acquireMachinesLock for ha-210800-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 18:34:53.642466    8396 start.go:364] duration metric: took 56.5µs to acquireMachinesLock for "ha-210800-m02"
	I0507 18:34:53.642466    8396 start.go:93] Provisioning new machine with config: &{Name:ha-210800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.132.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:34:53.642466    8396 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0507 18:34:53.647503    8396 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 18:34:53.648256    8396 start.go:159] libmachine.API.Create for "ha-210800" (driver="hyperv")
	I0507 18:34:53.648256    8396 client.go:168] LocalClient.Create starting
	I0507 18:34:53.648256    8396 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0507 18:34:53.648909    8396 main.go:141] libmachine: Decoding PEM data...
	I0507 18:34:53.648909    8396 main.go:141] libmachine: Parsing certificate...
	I0507 18:34:53.649026    8396 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0507 18:34:53.649274    8396 main.go:141] libmachine: Decoding PEM data...
	I0507 18:34:53.649274    8396 main.go:141] libmachine: Parsing certificate...
	I0507 18:34:53.649423    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0507 18:34:55.302176    8396 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0507 18:34:55.302176    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:55.302176    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0507 18:34:56.875180    8396 main.go:141] libmachine: [stdout =====>] : False
	
	I0507 18:34:56.875180    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:56.875886    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 18:34:58.251062    8396 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 18:34:58.251062    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:34:58.251062    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 18:35:01.464890    8396 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 18:35:01.464890    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:01.467634    8396 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0507 18:35:01.799387    8396 main.go:141] libmachine: Creating SSH key...
	I0507 18:35:01.995893    8396 main.go:141] libmachine: Creating VM...
	I0507 18:35:01.995893    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 18:35:04.522097    8396 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 18:35:04.522193    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:04.522268    8396 main.go:141] libmachine: Using switch "Default Switch"
	I0507 18:35:04.522383    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 18:35:06.118344    8396 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 18:35:06.118344    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:06.118429    8396 main.go:141] libmachine: Creating VHD
	I0507 18:35:06.118547    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0507 18:35:09.615833    8396 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1950C92D-8A1C-4003-BE25-8D22A31CD17E
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0507 18:35:09.615995    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:09.615995    8396 main.go:141] libmachine: Writing magic tar header
	I0507 18:35:09.616065    8396 main.go:141] libmachine: Writing SSH key tar header
	I0507 18:35:09.623873    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0507 18:35:12.572870    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:12.573110    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:12.573110    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\disk.vhd' -SizeBytes 20000MB
	I0507 18:35:14.901209    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:14.901209    8396 main.go:141] libmachine: [stderr =====>] : 
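The `Writing magic tar header` / `Writing SSH key tar header` steps follow the boot2docker disk convention: a small tar stream carrying the SSH key is written at the head of the raw disk, and the guest detects and unpacks it on first boot before formatting the rest. A hedged Python sketch of building such a stream (the member path is illustrative):

```python
import io
import tarfile

def make_key_tarball(pub_key: bytes) -> bytes:
    """Build an in-memory tar stream holding an SSH key, in the spirit of
    the 'magic tar header' the driver writes to the start of the VHD."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w") as tar:
        info = tarfile.TarInfo(name=".ssh/authorized_keys")  # illustrative path
        info.size = len(pub_key)
        tar.addfile(info, io.BytesIO(pub_key))
    return buf.getvalue()
```

The fixed 10MB VHD is created first because its payload sits at a known raw offset; only afterwards is it converted to a dynamic disk and resized to 20000MB, as the `Convert-VHD`/`Resize-VHD` calls above show.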
	I0507 18:35:14.901356    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-210800-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0507 18:35:18.087239    8396 main.go:141] libmachine: [stdout =====>] : 
Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-210800-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0507 18:35:18.087239    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:18.087239    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-210800-m02 -DynamicMemoryEnabled $false
	I0507 18:35:20.055236    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:20.055236    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:20.056282    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-210800-m02 -Count 2
	I0507 18:35:22.026297    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:22.026297    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:22.026588    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-210800-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\boot2docker.iso'
	I0507 18:35:24.346541    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:24.346541    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:24.346541    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-210800-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\disk.vhd'
	I0507 18:35:26.735238    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:26.735294    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:26.735294    8396 main.go:141] libmachine: Starting VM...
	I0507 18:35:26.735294    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-210800-m02
	I0507 18:35:29.528101    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:29.528101    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:29.528101    8396 main.go:141] libmachine: Waiting for host to start...
	I0507 18:35:29.528972    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:35:31.570711    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:35:31.570743    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:31.570796    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:35:33.845737    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:33.845737    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:34.853209    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:35:36.824586    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:35:36.824586    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:36.824790    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:35:39.071254    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:39.071254    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:40.077071    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:35:42.046289    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:35:42.046289    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:42.046399    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:35:44.288312    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:44.288466    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:45.289093    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:35:47.242202    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:35:47.242202    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:47.242202    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:35:49.498242    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:35:49.498242    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:50.499395    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:35:52.478319    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:35:52.478319    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:52.478319    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:35:54.785362    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:35:54.785362    8396 main.go:141] libmachine: [stderr =====>] : 
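`Waiting for host to start...` above is a poll loop: the driver alternates `(Get-VM ...).state` and the first network adapter's first IP until the guest reports an address (an empty stdout means DHCP has not assigned one yet). The retry pattern, sketched in Python with an injected probe so it stays testable (names illustrative):

```python
import time

def wait_for_ip(probe, attempts=60, delay=1.0, sleep=time.sleep):
    """Call probe() until it returns a non-empty IP or attempts run out."""
    for _ in range(attempts):
        ip = probe().strip()
        if ip:
            return ip
        sleep(delay)  # the real driver pauses between PowerShell calls
    raise TimeoutError("host never reported an IP address")
```

In the log, the probe comes back empty five times before `172.19.135.87` appears roughly 25 seconds after `Start-VM`.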
	I0507 18:35:54.785517    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:35:56.701849    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:35:56.702148    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:56.702148    8396 machine.go:94] provisionDockerMachine start ...
	I0507 18:35:56.702313    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:35:58.633010    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:35:58.633010    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:35:58.633098    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:00.900889    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:00.900889    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:00.905979    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:36:00.916629    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.87 22 <nil> <nil>}
	I0507 18:36:00.916629    8396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 18:36:01.047128    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0507 18:36:01.047128    8396 buildroot.go:166] provisioning hostname "ha-210800-m02"
	I0507 18:36:01.047128    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:03.019255    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:03.019255    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:03.019255    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:05.386249    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:05.386249    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:05.390937    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:36:05.391216    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.87 22 <nil> <nil>}
	I0507 18:36:05.391216    8396 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-210800-m02 && echo "ha-210800-m02" | sudo tee /etc/hostname
	I0507 18:36:05.540314    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-210800-m02
	
	I0507 18:36:05.540424    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:07.498257    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:07.498317    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:07.498317    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:09.814825    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:09.814825    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:09.822974    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:36:09.823234    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.87 22 <nil> <nil>}
	I0507 18:36:09.823234    8396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-210800-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-210800-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-210800-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
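The shell snippet above keeps /etc/hosts consistent with the newly set hostname: if no line already maps to ha-210800-m02, it rewrites an existing 127.0.1.1 entry in place, else appends one. The same decision, sketched in Python for clarity (illustrative, not minikube's code):

```python
def ensure_hostname_entry(hosts_text, name):
    """Return hosts_text with a 127.0.1.1 entry for name, idempotently."""
    lines = hosts_text.splitlines()
    if any(line.split()[-1:] == [name] for line in lines):
        return hosts_text  # already mapped; leave the file alone
    for i, line in enumerate(lines):
        if line.startswith("127.0.1.1"):
            lines[i] = "127.0.1.1 " + name  # rewrite the stale entry
            return "\n".join(lines)
    return hosts_text.rstrip("\n") + "\n127.0.1.1 " + name + "\n"
```

Running it twice changes nothing the second time, which is why the SSH command can be replayed safely on re-provisioning.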
	I0507 18:36:09.967390    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 18:36:09.967390    8396 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0507 18:36:09.967390    8396 buildroot.go:174] setting up certificates
	I0507 18:36:09.967390    8396 provision.go:84] configureAuth start
	I0507 18:36:09.967390    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:11.900723    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:11.900723    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:11.900723    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:14.162863    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:14.162984    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:14.162984    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:16.063190    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:16.063190    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:16.063190    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:18.401243    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:18.401243    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:18.401243    8396 provision.go:143] copyHostCerts
	I0507 18:36:18.401243    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0507 18:36:18.401243    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0507 18:36:18.401243    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0507 18:36:18.401853    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0507 18:36:18.402476    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0507 18:36:18.402476    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0507 18:36:18.402476    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0507 18:36:18.402476    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0507 18:36:18.403816    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0507 18:36:18.403816    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0507 18:36:18.403816    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0507 18:36:18.403816    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0507 18:36:18.404487    8396 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-210800-m02 san=[127.0.0.1 172.19.135.87 ha-210800-m02 localhost minikube]
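The server certificate above is generated with a SAN list mixing IP addresses and DNS names (`127.0.0.1 172.19.135.87 ha-210800-m02 localhost minikube`); a TLS certificate records these in separate IP-address and DNS-name SAN fields. A Python sketch of that partition (illustrative):

```python
import ipaddress

def split_sans(sans):
    """Partition SAN entries into IP SANs and DNS SANs."""
    ips, dns = [], []
    for entry in sans:
        try:
            ipaddress.ip_address(entry)  # raises ValueError for hostnames
            ips.append(entry)
        except ValueError:
            dns.append(entry)
    return ips, dns
```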
	I0507 18:36:18.717435    8396 provision.go:177] copyRemoteCerts
	I0507 18:36:18.725073    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 18:36:18.725073    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:20.663851    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:20.664496    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:20.664496    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:22.996756    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:22.996756    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:22.997304    8396 sshutil.go:53] new ssh client: &{IP:172.19.135.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
	I0507 18:36:23.100348    8396 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3749722s)
	I0507 18:36:23.100348    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0507 18:36:23.100348    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0507 18:36:23.145451    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0507 18:36:23.146414    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0507 18:36:23.198247    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0507 18:36:23.198247    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0507 18:36:23.242848    8396 provision.go:87] duration metric: took 13.2745419s to configureAuth
	I0507 18:36:23.242848    8396 buildroot.go:189] setting minikube options for container-runtime
	I0507 18:36:23.243467    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:36:23.243467    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:25.184681    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:25.184681    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:25.185320    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:27.491096    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:27.491169    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:27.496471    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:36:27.496471    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.87 22 <nil> <nil>}
	I0507 18:36:27.496471    8396 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 18:36:27.624201    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 18:36:27.624248    8396 buildroot.go:70] root file system type: tmpfs
	I0507 18:36:27.624248    8396 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 18:36:27.624248    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:29.511759    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:29.511759    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:29.511759    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:31.785015    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:31.785779    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:31.789385    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:36:31.789385    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.87 22 <nil> <nil>}
	I0507 18:36:31.790011    8396 main.go:141] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.132.69"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 18:36:31.957966    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.132.69
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 18:36:31.958091    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:33.813960    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:33.814405    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:33.814454    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:36.087126    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:36.087126    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:36.091729    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:36:36.092253    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.87 22 <nil> <nil>}
	I0507 18:36:36.092253    8396 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 18:36:38.165921    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
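The `diff ... || { mv ...; systemctl ... restart docker; }` one-liner above makes the unit update idempotent: Docker is only reinstalled and restarted when the rendered docker.service actually differs (here `diff` fails because the file did not exist yet, so the new unit is moved into place and the service enabled). The pattern in Python, with the side effect injected for testability (illustrative):

```python
def install_if_changed(current, new, install):
    """Install the new unit (and signal a restart) only when it differs."""
    if current == new:
        return False  # identical unit; skip a needless docker restart
    install(new)      # mv docker.service.new into place
    return True       # caller runs daemon-reload / enable / restart
```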
	
	I0507 18:36:38.165921    8396 machine.go:97] duration metric: took 41.4608334s to provisionDockerMachine
	I0507 18:36:38.166025    8396 client.go:171] duration metric: took 1m44.5104599s to LocalClient.Create
	I0507 18:36:38.166025    8396 start.go:167] duration metric: took 1m44.5105636s to libmachine.API.Create "ha-210800"
	I0507 18:36:38.166025    8396 start.go:293] postStartSetup for "ha-210800-m02" (driver="hyperv")
	I0507 18:36:38.166025    8396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 18:36:38.174369    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 18:36:38.175374    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:40.063971    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:40.064139    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:40.064139    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:42.346137    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:42.346354    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:42.346732    8396 sshutil.go:53] new ssh client: &{IP:172.19.135.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
	I0507 18:36:42.456065    8396 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2814012s)
	I0507 18:36:42.467287    8396 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 18:36:42.473746    8396 info.go:137] Remote host: Buildroot 2023.02.9
	I0507 18:36:42.473746    8396 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0507 18:36:42.473746    8396 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0507 18:36:42.473746    8396 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> 99922.pem in /etc/ssl/certs
	I0507 18:36:42.473746    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /etc/ssl/certs/99922.pem
	I0507 18:36:42.483093    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0507 18:36:42.504052    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /etc/ssl/certs/99922.pem (1708 bytes)
	I0507 18:36:42.547781    8396 start.go:296] duration metric: took 4.3814539s for postStartSetup
	I0507 18:36:42.549725    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:44.454070    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:44.454070    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:44.455094    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:46.707416    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:46.708357    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:46.708357    8396 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:36:46.709511    8396 start.go:128] duration metric: took 1m53.0592505s to createHost
	I0507 18:36:46.710107    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:48.586410    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:48.587408    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:48.587520    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:50.829564    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:50.829564    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:50.834093    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:36:50.834620    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.87 22 <nil> <nil>}
	I0507 18:36:50.834620    8396 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0507 18:36:50.959427    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715107011.165850792
	
	I0507 18:36:50.959427    8396 fix.go:216] guest clock: 1715107011.165850792
	I0507 18:36:50.959427    8396 fix.go:229] Guest: 2024-05-07 18:36:51.165850792 +0000 UTC Remote: 2024-05-07 18:36:46.710028 +0000 UTC m=+306.542564401 (delta=4.455822792s)
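An aside on the odd-looking `date +%!s(MISSING).%!N(MISSING)` above: this appears to be Go's `fmt` package escaping the literal command `date +%s.%N` (verbs with no arguments render as `%!s(MISSING)`), so the guest is just reporting its clock as epoch seconds and nanoseconds. The 4.455822792s delta logged by `fix.go` can be reproduced from the two timestamps in the log; a minimal sketch, printing three decimals to stay well inside double precision:

```shell
# Timestamps copied from the log lines above:
guest=1715107011.165850792    # guest clock: 2024-05-07 18:36:51.165850792 UTC
remote=1715107006.710028      # host-side reference: 2024-05-07 18:36:46.710028 UTC
awk -v g="$guest" -v r="$remote" 'BEGIN { printf "%.3f\n", g - r }'   # → 4.456
```

A delta this large is why the very next SSH command in the log is `sudo date -s @1715107010`, resetting the guest clock to the host's.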
	I0507 18:36:50.959427    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:52.848185    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:52.848185    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:52.848185    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:55.122226    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:55.122226    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:55.126779    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:36:55.127304    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.87 22 <nil> <nil>}
	I0507 18:36:55.127341    8396 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715107010
	I0507 18:36:55.259970    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May  7 18:36:50 UTC 2024
	
	I0507 18:36:55.259970    8396 fix.go:236] clock set: Tue May  7 18:36:50 UTC 2024
	 (err=<nil>)
	I0507 18:36:55.259970    8396 start.go:83] releasing machines lock for "ha-210800-m02", held for 2m1.60912s
	I0507 18:36:55.260991    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:57.177701    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:36:57.178380    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:57.178491    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:36:59.484964    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:36:59.485453    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:36:59.499630    8396 out.go:177] * Found network options:
	I0507 18:36:59.503173    8396 out.go:177]   - NO_PROXY=172.19.132.69
	W0507 18:36:59.505237    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	I0507 18:36:59.507381    8396 out.go:177]   - NO_PROXY=172.19.132.69
	W0507 18:36:59.510059    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	W0507 18:36:59.511746    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	I0507 18:36:59.513647    8396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 18:36:59.513792    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:36:59.521577    8396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0507 18:36:59.521577    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:37:01.482424    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:37:01.482424    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:01.482511    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:37:01.510612    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:37:01.510612    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:01.510612    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 18:37:03.867682    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:37:03.867682    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:03.868908    8396 sshutil.go:53] new ssh client: &{IP:172.19.135.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
	I0507 18:37:03.891314    8396 main.go:141] libmachine: [stdout =====>] : 172.19.135.87
	
	I0507 18:37:03.892357    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:03.892682    8396 sshutil.go:53] new ssh client: &{IP:172.19.135.87 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m02\id_rsa Username:docker}
	I0507 18:37:04.043891    8396 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5219212s)
	I0507 18:37:04.043891    8396 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5298603s)
	W0507 18:37:04.043969    8396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 18:37:04.053902    8396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0507 18:37:04.082542    8396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
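The `%!p(MISSING)` in the `find` command above is the same Go `fmt` logging artifact (the literal `-printf "%p, "` lost its verb). Functionally, the command renames any bridge/podman CNI configs out of the way so they cannot conflict with the chosen CNI. A safe-to-run sketch of the same predicate against a throwaway directory (paths illustrative; no `sudo`, no `-printf`):

```shell
mkdir -p /tmp/cni-demo && : > /tmp/cni-demo/87-podman-bridge.conflist
# Same match logic as the log's find: bridge/podman configs not yet disabled
find /tmp/cni-demo -maxdepth 1 -type f \
  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;
ls /tmp/cni-demo   # → 87-podman-bridge.conflist.mk_disabled
```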
	I0507 18:37:04.082670    8396 start.go:494] detecting cgroup driver to use...
	I0507 18:37:04.082954    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 18:37:04.126289    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0507 18:37:04.153715    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 18:37:04.172629    8396 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 18:37:04.181737    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 18:37:04.207003    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 18:37:04.233343    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 18:37:04.259767    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 18:37:04.287222    8396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 18:37:04.314043    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 18:37:04.340933    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 18:37:04.369680    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0507 18:37:04.397010    8396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 18:37:04.421308    8396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 18:37:04.445548    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:37:04.629394    8396 ssh_runner.go:195] Run: sudo systemctl restart containerd
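The run of `sed` edits above rewrites `/etc/containerd/config.toml` so containerd uses the `cgroupfs` driver (matching the kubelet) rather than the systemd cgroup driver. The key substitution can be exercised against a throwaway copy; the TOML fragment below is illustrative, not the VM's full config:

```shell
cat > /tmp/containerd-demo.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF
# Same substitution as the log, with \1 preserving the leading indentation:
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /tmp/containerd-demo.toml
grep SystemdCgroup /tmp/containerd-demo.toml   # →   SystemdCgroup = false
```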
	I0507 18:37:04.664712    8396 start.go:494] detecting cgroup driver to use...
	I0507 18:37:04.678930    8396 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 18:37:04.716909    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 18:37:04.747292    8396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 18:37:04.784326    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 18:37:04.816252    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 18:37:04.847205    8396 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0507 18:37:04.899432    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 18:37:04.921082    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 18:37:04.965108    8396 ssh_runner.go:195] Run: which cri-dockerd
	I0507 18:37:04.979458    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 18:37:04.997174    8396 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 18:37:05.035246    8396 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 18:37:05.224435    8396 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 18:37:05.411129    8396 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 18:37:05.411459    8396 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
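`docker.go:574` scp's a 130-byte `/etc/docker/daemon.json` to pin Docker itself to the `cgroupfs` driver. The log does not show the file contents; a plausible minimal sketch (the `exec-opts` field is Docker's documented knob for the cgroup driver, but the exact payload here is an assumption), written to a scratch path:

```shell
# Hypothetical daemon.json matching the "cgroupfs" driver named in the log:
cat > /tmp/daemon-demo.json <<'EOF'
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
EOF
grep cgroupdriver /tmp/daemon-demo.json
```

The `systemctl daemon-reload` and `systemctl restart docker` that follow in the log are what make such a change take effect.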
	I0507 18:37:05.451260    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:37:05.638005    8396 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 18:37:08.129421    8396 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.49117s)
	I0507 18:37:08.140666    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 18:37:08.169675    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 18:37:08.206191    8396 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 18:37:08.387393    8396 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 18:37:08.568759    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:37:08.751042    8396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 18:37:08.788382    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 18:37:08.820363    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:37:09.003468    8396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 18:37:09.098218    8396 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 18:37:09.105844    8396 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 18:37:09.114636    8396 start.go:562] Will wait 60s for crictl version
	I0507 18:37:09.122774    8396 ssh_runner.go:195] Run: which crictl
	I0507 18:37:09.137755    8396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 18:37:09.187114    8396 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0507 18:37:09.193910    8396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 18:37:09.227461    8396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 18:37:09.257597    8396 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0507 18:37:09.260632    8396 out.go:177]   - env NO_PROXY=172.19.132.69
	I0507 18:37:09.262597    8396 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0507 18:37:09.266594    8396 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0507 18:37:09.266594    8396 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0507 18:37:09.266594    8396 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0507 18:37:09.266594    8396 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a3:a5:4f Flags:up|broadcast|multicast|running}
	I0507 18:37:09.269601    8396 ip.go:210] interface addr: fe80::1edb:f5fd:c218:d8d2/64
	I0507 18:37:09.269601    8396 ip.go:210] interface addr: 172.19.128.1/20
	I0507 18:37:09.277610    8396 ssh_runner.go:195] Run: grep 172.19.128.1	host.minikube.internal$ /etc/hosts
	I0507 18:37:09.284239    8396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
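The `/bin/bash -c` one-liner above is minikube's idempotent way of pinning `host.minikube.internal` to the Hyper-V host IP found a few lines earlier: strip any existing entry, append the fresh one, then copy the result back over `/etc/hosts`. The same pattern against a throwaway file (demo IPs only, `mv` standing in for the log's `sudo cp`):

```shell
hosts=/tmp/hosts-demo
printf '127.0.0.1\tlocalhost\n172.19.128.99\thost.minikube.internal\n' > "$hosts"
# Drop any stale entry, then append the current host IP:
{ grep -v 'host.minikube.internal$' "$hosts"; \
  printf '172.19.128.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
grep -c 'host.minikube.internal' "$hosts"   # → 1 (stale entry replaced, not duplicated)
```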
	I0507 18:37:09.307355    8396 mustload.go:65] Loading cluster: ha-210800
	I0507 18:37:09.307920    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:37:09.308395    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:37:11.165135    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:37:11.165891    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:11.165891    8396 host.go:66] Checking if "ha-210800" exists ...
	I0507 18:37:11.166448    8396 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800 for IP: 172.19.135.87
	I0507 18:37:11.166448    8396 certs.go:194] generating shared ca certs ...
	I0507 18:37:11.166521    8396 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:37:11.167120    8396 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0507 18:37:11.167502    8396 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0507 18:37:11.167694    8396 certs.go:256] generating profile certs ...
	I0507 18:37:11.168199    8396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.key
	I0507 18:37:11.168333    8396 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.8baf5605
	I0507 18:37:11.168399    8396 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.8baf5605 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.132.69 172.19.135.87 172.19.143.254]
	I0507 18:37:11.318887    8396 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.8baf5605 ...
	I0507 18:37:11.318887    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.8baf5605: {Name:mk35e8980a1be180b9dd44f1c2ba2dbe349f4b0c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:37:11.320502    8396 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.8baf5605 ...
	I0507 18:37:11.320502    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.8baf5605: {Name:mk357a8d7b50038f91b10e63854b4690ca652ef1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:37:11.321286    8396 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.8baf5605 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt
	I0507 18:37:11.333498    8396 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.8baf5605 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key
	I0507 18:37:11.336089    8396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key
	I0507 18:37:11.336089    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0507 18:37:11.336089    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0507 18:37:11.336089    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0507 18:37:11.336089    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0507 18:37:11.336089    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0507 18:37:11.336089    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0507 18:37:11.336089    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0507 18:37:11.336089    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0507 18:37:11.337616    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem (1338 bytes)
	W0507 18:37:11.337999    8396 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992_empty.pem, impossibly tiny 0 bytes
	I0507 18:37:11.337999    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0507 18:37:11.338225    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0507 18:37:11.338613    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0507 18:37:11.338843    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0507 18:37:11.339433    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem (1708 bytes)
	I0507 18:37:11.339627    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem -> /usr/share/ca-certificates/9992.pem
	I0507 18:37:11.339825    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /usr/share/ca-certificates/99922.pem
	I0507 18:37:11.339942    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:37:11.340274    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:37:13.210015    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:37:13.210436    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:13.210532    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:37:15.557119    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:37:15.557756    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:15.557818    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:37:15.655043    8396 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0507 18:37:15.663299    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0507 18:37:15.695500    8396 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0507 18:37:15.703402    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0507 18:37:15.734444    8396 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0507 18:37:15.741330    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0507 18:37:15.774955    8396 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0507 18:37:15.781555    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0507 18:37:15.807296    8396 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0507 18:37:15.813150    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0507 18:37:15.840423    8396 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0507 18:37:15.847078    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0507 18:37:15.866708    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 18:37:15.921119    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 18:37:15.963844    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 18:37:16.005197    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0507 18:37:16.048372    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0507 18:37:16.091763    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0507 18:37:16.133774    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0507 18:37:16.176062    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0507 18:37:16.217463    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem --> /usr/share/ca-certificates/9992.pem (1338 bytes)
	I0507 18:37:16.259673    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /usr/share/ca-certificates/99922.pem (1708 bytes)
	I0507 18:37:16.301502    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 18:37:16.343924    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0507 18:37:16.373370    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0507 18:37:16.402362    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0507 18:37:16.436385    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0507 18:37:16.463732    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0507 18:37:16.492226    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0507 18:37:16.524147    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0507 18:37:16.564003    8396 ssh_runner.go:195] Run: openssl version
	I0507 18:37:16.580874    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9992.pem && ln -fs /usr/share/ca-certificates/9992.pem /etc/ssl/certs/9992.pem"
	I0507 18:37:16.607027    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9992.pem
	I0507 18:37:16.612868    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 18:37:16.623865    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9992.pem
	I0507 18:37:16.640722    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9992.pem /etc/ssl/certs/51391683.0"
	I0507 18:37:16.668111    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99922.pem && ln -fs /usr/share/ca-certificates/99922.pem /etc/ssl/certs/99922.pem"
	I0507 18:37:16.698614    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99922.pem
	I0507 18:37:16.705594    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 18:37:16.716265    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99922.pem
	I0507 18:37:16.732438    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99922.pem /etc/ssl/certs/3ec20f2e.0"
	I0507 18:37:16.758374    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 18:37:16.786075    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:37:16.793262    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:37:16.803030    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:37:16.817966    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
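The opaque link names above (`51391683.0`, `3ec20f2e.0`, `b5213941.0`) are OpenSSL subject-hash links: `openssl x509 -hash` prints an 8-hex-digit digest of the certificate subject, and a `<hash>.0` symlink in `/etc/ssl/certs` is how OpenSSL finds a CA at verification time. The convention, demonstrated with a throwaway self-signed cert (all paths illustrative):

```shell
# Generate a disposable self-signed cert to hash:
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=demo' -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt 2>/dev/null
h=$(openssl x509 -hash -noout -in /tmp/demo.crt)
ln -fs /tmp/demo.crt "/tmp/$h.0"   # mirrors the log's ln -fs into /etc/ssl/certs
echo "$h"                          # 8 hex digits, e.g. b5213941
```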
	I0507 18:37:16.844006    8396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 18:37:16.851097    8396 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0507 18:37:16.851097    8396 kubeadm.go:928] updating node {m02 172.19.135.87 8443 v1.30.0 docker true true} ...
	I0507 18:37:16.851097    8396 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-210800-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.135.87
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
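The kubelet unit fragment above uses the standard systemd drop-in trick: the first, empty `ExecStart=` clears the command inherited from the base unit, and the second `ExecStart=` substitutes minikube's flags (`--hostname-override`, `--node-ip`, and so on). The same shape written to a scratch directory (the drop-in filename is an assumption; only the two-line `ExecStart` pattern is taken from the log):

```shell
mkdir -p /tmp/kubelet.service.d
cat > /tmp/kubelet.service.d/10-kubeadm.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --hostname-override=ha-210800-m02 --node-ip=172.19.135.87
EOF
grep -c '^ExecStart=' /tmp/kubelet.service.d/10-kubeadm.conf   # → 2 (reset + replacement)
```

Without the empty reset line, systemd would reject a second `ExecStart=` for a non-oneshot service.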
	I0507 18:37:16.851628    8396 kube-vip.go:111] generating kube-vip config ...
	I0507 18:37:16.861303    8396 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0507 18:37:16.886603    8396 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0507 18:37:16.886752    8396 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.143.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0507 18:37:16.895923    8396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0507 18:37:16.912700    8396 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0507 18:37:16.921310    8396 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0507 18:37:16.941484    8396 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm
	I0507 18:37:16.941691    8396 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet
	I0507 18:37:16.941795    8396 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl
	I0507 18:37:17.971653    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0507 18:37:17.980284    8396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0507 18:37:17.988204    8396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0507 18:37:17.988407    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0507 18:37:18.592076    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0507 18:37:18.600906    8396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0507 18:37:18.607343    8396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0507 18:37:18.608345    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0507 18:37:19.351886    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 18:37:19.375900    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0507 18:37:19.384993    8396 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0507 18:37:19.391883    8396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0507 18:37:19.392115    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
	I0507 18:37:19.991339    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0507 18:37:20.007259    8396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0507 18:37:20.035352    8396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 18:37:20.064679    8396 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0507 18:37:20.105207    8396 ssh_runner.go:195] Run: grep 172.19.143.254	control-plane.minikube.internal$ /etc/hosts
	I0507 18:37:20.112978    8396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 18:37:20.142887    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:37:20.327605    8396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 18:37:20.490146    8396 host.go:66] Checking if "ha-210800" exists ...
	I0507 18:37:20.498513    8396 start.go:316] joinCluster: &{Name:ha-210800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.132.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.135.87 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExp
iration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 18:37:20.498513    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0507 18:37:20.498513    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:37:22.406803    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:37:22.406803    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:22.406999    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:37:24.714235    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:37:24.714309    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:37:24.714654    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:37:24.917515    8396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.4186984s)
	I0507 18:37:24.917613    8396 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.19.135.87 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:37:24.917613    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wju3ow.zru46704qlro3ubh --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-210800-m02 --control-plane --apiserver-advertise-address=172.19.135.87 --apiserver-bind-port=8443"
	I0507 18:38:05.735195    8396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wju3ow.zru46704qlro3ubh --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-210800-m02 --control-plane --apiserver-advertise-address=172.19.135.87 --apiserver-bind-port=8443": (40.8147749s)
	I0507 18:38:05.735195    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0507 18:38:06.486435    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-210800-m02 minikube.k8s.io/updated_at=2024_05_07T18_38_06_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f minikube.k8s.io/name=ha-210800 minikube.k8s.io/primary=false
	I0507 18:38:06.660088    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-210800-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0507 18:38:06.809499    8396 start.go:318] duration metric: took 46.3078008s to joinCluster
	I0507 18:38:06.809697    8396 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.135.87 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:38:06.812704    8396 out.go:177] * Verifying Kubernetes components...
	I0507 18:38:06.810543    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:38:06.823813    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:38:07.164428    8396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 18:38:07.211420    8396 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:38:07.211420    8396 kapi.go:59] client config for ha-210800: &rest.Config{Host:"https://172.19.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-210800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-210800\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Ne
xtProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0507 18:38:07.211420    8396 kubeadm.go:477] Overriding stale ClientConfig host https://172.19.143.254:8443 with https://172.19.132.69:8443
	I0507 18:38:07.212429    8396 node_ready.go:35] waiting up to 6m0s for node "ha-210800-m02" to be "Ready" ...
	I0507 18:38:07.212429    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:07.212429    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:07.212429    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:07.212429    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:07.236261    8396 round_trippers.go:574] Response Status: 200 OK in 23 milliseconds
	I0507 18:38:07.712907    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:07.713063    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:07.713063    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:07.713063    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:08.062513    8396 round_trippers.go:574] Response Status: 200 OK in 349 milliseconds
	I0507 18:38:08.220311    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:08.220530    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:08.220530    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:08.220530    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:08.226332    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:08.714864    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:08.714944    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:08.714944    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:08.714944    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:08.719950    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:09.222641    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:09.222641    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:09.222641    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:09.222641    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:09.227753    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:09.228892    8396 node_ready.go:53] node "ha-210800-m02" has status "Ready":"False"
	I0507 18:38:09.717031    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:09.717031    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:09.717031    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:09.717182    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:09.722336    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:10.225740    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:10.225740    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:10.225740    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:10.225740    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:10.658343    8396 round_trippers.go:574] Response Status: 200 OK in 431 milliseconds
	I0507 18:38:10.713126    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:10.713126    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:10.713126    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:10.713126    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:10.716862    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:11.216552    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:11.216552    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:11.216552    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:11.216552    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:11.222895    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:38:11.723679    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:11.723764    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:11.723764    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:11.723764    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:11.728123    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:11.729851    8396 node_ready.go:53] node "ha-210800-m02" has status "Ready":"False"
	I0507 18:38:12.228465    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:12.228465    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:12.228465    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:12.228465    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:12.235375    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:38:12.727131    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:12.727247    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:12.727247    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:12.727247    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:12.732701    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:13.216903    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:13.217109    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:13.217109    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:13.217109    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:13.225109    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:38:13.716032    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:13.716032    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:13.716032    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:13.716032    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:13.721598    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:14.216775    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:14.216775    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.216775    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.216775    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.229114    8396 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0507 18:38:14.229114    8396 node_ready.go:49] node "ha-210800-m02" has status "Ready":"True"
	I0507 18:38:14.229114    8396 node_ready.go:38] duration metric: took 7.0162034s for node "ha-210800-m02" to be "Ready" ...
	I0507 18:38:14.229705    8396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 18:38:14.229705    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:38:14.229833    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.229833    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.229833    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.235019    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:14.243661    8396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:14.243661    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cr9nn
	I0507 18:38:14.243661    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.243661    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.243661    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.247500    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:14.248492    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:14.248492    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.248492    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.248492    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.253535    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:14.254317    8396 pod_ready.go:92] pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:14.254317    8396 pod_ready.go:81] duration metric: took 10.6552ms for pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:14.254317    8396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:14.254409    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dxsqf
	I0507 18:38:14.254456    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.254456    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.254483    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.258002    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:14.259323    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:14.259323    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.259323    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.259323    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.263619    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:14.264441    8396 pod_ready.go:92] pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:14.264536    8396 pod_ready.go:81] duration metric: took 10.2192ms for pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:14.264536    8396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:14.264739    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800
	I0507 18:38:14.264756    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.264756    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.264756    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.266968    8396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:38:14.268584    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:14.268623    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.268623    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.268653    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.271791    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:14.271791    8396 pod_ready.go:92] pod "etcd-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:14.271791    8396 pod_ready.go:81] duration metric: took 7.2539ms for pod "etcd-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:14.271791    8396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:14.271791    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:14.271791    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.271791    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.271791    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.276375    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:14.276375    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:14.276913    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.276976    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.276976    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.280068    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:14.782005    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:14.782109    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.782140    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.782140    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.786748    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:14.787964    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:14.787964    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:14.788047    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:14.788047    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:14.795225    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:38:15.278633    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:15.278782    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:15.278782    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:15.278782    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:15.283153    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:15.284429    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:15.284429    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:15.284511    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:15.284511    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:15.288511    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:15.776848    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:15.776848    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:15.776848    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:15.776848    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:15.780615    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:15.781377    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:15.781377    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:15.781377    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:15.781377    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:15.785968    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:16.277424    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:16.277424    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:16.277424    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:16.277424    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:16.281572    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:16.283217    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:16.283217    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:16.283217    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:16.283304    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:16.287516    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:16.288179    8396 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 18:38:16.773095    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:16.773202    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:16.773202    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:16.773202    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:16.782408    8396 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0507 18:38:16.783445    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:16.783445    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:16.783445    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:16.783445    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:16.787040    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:17.278164    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:17.278228    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:17.278261    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:17.278261    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:17.282058    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:17.283893    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:17.283986    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:17.283986    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:17.283986    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:17.287711    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:17.776668    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:17.776758    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:17.776758    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:17.776837    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:17.785328    8396 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 18:38:17.786547    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:17.786547    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:17.786620    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:17.786620    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:17.790057    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:18.277169    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:18.277169    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:18.277169    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:18.277169    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:18.282767    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:18.283809    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:18.283870    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:18.283870    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:18.283870    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:18.287818    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:18.288402    8396 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 18:38:18.781392    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:18.781472    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:18.781472    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:18.781472    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:18.786206    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:18.786999    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:18.786999    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:18.787087    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:18.787087    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:18.790937    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:19.282473    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:19.282473    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:19.282473    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:19.282566    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:19.286511    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:19.287882    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:19.287882    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:19.287882    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:19.287882    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:19.301568    8396 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0507 18:38:19.772556    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:19.772625    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:19.772694    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:19.772694    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:19.778977    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:38:19.779831    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:19.780366    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:19.780366    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:19.780366    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:19.783046    8396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:38:20.282938    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:20.283005    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:20.283005    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:20.283074    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:20.287537    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:20.288513    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:20.288607    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:20.288607    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:20.288607    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:20.292662    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:20.293302    8396 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 18:38:20.784037    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:20.784037    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:20.784037    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:20.784037    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:20.789152    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:20.790013    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:20.790084    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:20.790084    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:20.790084    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:20.794322    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:21.287286    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:21.287286    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:21.287286    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:21.287286    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:21.292862    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:21.294871    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:21.294871    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:21.294871    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:21.294871    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:21.306491    8396 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0507 18:38:21.776226    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:21.776299    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:21.776299    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:21.776299    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:21.781532    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:21.782163    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:21.782163    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:21.782163    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:21.782163    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:21.786865    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:22.283387    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:22.283387    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.283387    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.283696    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.288416    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:22.288416    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:22.288416    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.289231    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.289231    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.293315    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:22.293839    8396 pod_ready.go:102] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"False"
	I0507 18:38:22.777200    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:38:22.777200    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.777200    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.777200    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.781680    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:22.782621    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:22.782726    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.782726    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.782726    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.789055    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:38:22.790670    8396 pod_ready.go:92] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:22.790733    8396 pod_ready.go:81] duration metric: took 8.518358s for pod "etcd-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.790733    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.790832    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-210800
	I0507 18:38:22.790832    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.790832    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.790832    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.795694    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:22.801928    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:22.801928    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.801928    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.801928    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.807452    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:22.808657    8396 pod_ready.go:92] pod "kube-apiserver-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:22.808657    8396 pod_ready.go:81] duration metric: took 17.9226ms for pod "kube-apiserver-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.808657    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.808657    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-210800-m02
	I0507 18:38:22.808657    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.808657    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.808657    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.813414    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:22.813829    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:22.813829    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.813829    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.813829    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.818566    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:22.818984    8396 pod_ready.go:92] pod "kube-apiserver-ha-210800-m02" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:22.819041    8396 pod_ready.go:81] duration metric: took 10.3835ms for pod "kube-apiserver-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.819041    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.819147    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-210800
	I0507 18:38:22.819147    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.819215    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.819215    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.823456    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:22.824159    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:22.824159    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.824159    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.824210    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.832757    8396 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 18:38:22.832757    8396 pod_ready.go:92] pod "kube-controller-manager-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:22.832757    8396 pod_ready.go:81] duration metric: took 13.7152ms for pod "kube-controller-manager-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.832757    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.832757    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-210800-m02
	I0507 18:38:22.832757    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.832757    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.832757    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.855467    8396 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0507 18:38:22.856263    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:22.856263    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.856263    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.856324    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.861130    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:22.862529    8396 pod_ready.go:92] pod "kube-controller-manager-ha-210800-m02" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:22.862529    8396 pod_ready.go:81] duration metric: took 29.7698ms for pod "kube-controller-manager-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.862529    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6qdqt" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:22.981644    8396 request.go:629] Waited for 118.7757ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6qdqt
	I0507 18:38:22.981744    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6qdqt
	I0507 18:38:22.981744    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:22.981744    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:22.981828    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:22.987571    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:23.186839    8396 request.go:629] Waited for 197.9896ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:23.187078    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:23.187078    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:23.187078    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:23.187078    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:23.191351    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:23.193098    8396 pod_ready.go:92] pod "kube-proxy-6qdqt" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:23.193223    8396 pod_ready.go:81] duration metric: took 330.5658ms for pod "kube-proxy-6qdqt" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:23.193223    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rshfg" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:23.389300    8396 request.go:629] Waited for 195.9536ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rshfg
	I0507 18:38:23.389300    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rshfg
	I0507 18:38:23.389300    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:23.389724    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:23.389724    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:23.394095    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:23.589966    8396 request.go:629] Waited for 195.0566ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:23.589966    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:23.589966    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:23.589966    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:23.589966    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:23.595314    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:38:23.597174    8396 pod_ready.go:92] pod "kube-proxy-rshfg" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:23.597240    8396 pod_ready.go:81] duration metric: took 403.9895ms for pod "kube-proxy-rshfg" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:23.597307    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:23.777721    8396 request.go:629] Waited for 180.0324ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800
	I0507 18:38:23.778094    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800
	I0507 18:38:23.778186    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:23.778186    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:23.778186    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:23.782724    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:23.980754    8396 request.go:629] Waited for 197.092ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:23.981046    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:38:23.981046    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:23.981237    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:23.981346    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:23.985610    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:23.986989    8396 pod_ready.go:92] pod "kube-scheduler-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:23.986989    8396 pod_ready.go:81] duration metric: took 389.6559ms for pod "kube-scheduler-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:23.987100    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:24.183923    8396 request.go:629] Waited for 196.572ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800-m02
	I0507 18:38:24.184210    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800-m02
	I0507 18:38:24.184361    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:24.184361    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:24.184361    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:24.188680    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:24.387628    8396 request.go:629] Waited for 197.6213ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:24.388070    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:38:24.388070    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:24.388157    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:24.388157    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:24.391350    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:38:24.392892    8396 pod_ready.go:92] pod "kube-scheduler-ha-210800-m02" in "kube-system" namespace has status "Ready":"True"
	I0507 18:38:24.392892    8396 pod_ready.go:81] duration metric: took 405.7642ms for pod "kube-scheduler-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:38:24.392956    8396 pod_ready.go:38] duration metric: took 10.1625532s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 18:38:24.392956    8396 api_server.go:52] waiting for apiserver process to appear ...
	I0507 18:38:24.401692    8396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 18:38:24.426342    8396 api_server.go:72] duration metric: took 17.6152636s to wait for apiserver process to appear ...
	I0507 18:38:24.426342    8396 api_server.go:88] waiting for apiserver healthz status ...
	I0507 18:38:24.426399    8396 api_server.go:253] Checking apiserver healthz at https://172.19.132.69:8443/healthz ...
	I0507 18:38:24.435538    8396 api_server.go:279] https://172.19.132.69:8443/healthz returned 200:
	ok
	I0507 18:38:24.436514    8396 round_trippers.go:463] GET https://172.19.132.69:8443/version
	I0507 18:38:24.436553    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:24.436553    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:24.436553    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:24.437592    8396 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0507 18:38:24.437843    8396 api_server.go:141] control plane version: v1.30.0
	I0507 18:38:24.437843    8396 api_server.go:131] duration metric: took 11.4995ms to wait for apiserver health ...
	I0507 18:38:24.437843    8396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0507 18:38:24.591225    8396 request.go:629] Waited for 153.144ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:38:24.591315    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:38:24.591315    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:24.591431    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:24.591431    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:24.599025    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:38:24.606019    8396 system_pods.go:59] 17 kube-system pods found
	I0507 18:38:24.606019    8396 system_pods.go:61] "coredns-7db6d8ff4d-cr9nn" [24c45106-2ef4-4932-ae5d-549fb0177b13] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "coredns-7db6d8ff4d-dxsqf" [d32c637e-c641-4ef7-b2ed-b6449fe7d50f] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "etcd-ha-210800" [6888d4a2-b10e-4329-b3de-90fc4bb053f3] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "etcd-ha-210800-m02" [97f10401-7c02-421d-abe4-2b9f37dd3f39] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kindnet-57g8k" [6067a407-ee57-44ab-9591-9217deded72a] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kindnet-whrqx" [ded04b26-3100-453a-9c0f-0a7cced93180] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-apiserver-ha-210800" [74b614eb-d1ef-4707-b1a9-faeb68a9abf4] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-apiserver-ha-210800-m02" [3399e7eb-50f0-49a6-9dbe-1d5964e62a63] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-controller-manager-ha-210800" [9d31f6b7-c758-4599-9087-d38a0f929769] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-controller-manager-ha-210800-m02" [e20ed11b-7d94-407a-a1cb-0440b3b29eb9] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-proxy-6qdqt" [83aff3e5-b08d-4b7e-8dc2-c2fd1fd9bec7] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-proxy-rshfg" [2ce7075a-2b4a-4e31-80bf-7de27797a8d6] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-scheduler-ha-210800" [37fbafc0-eae6-407e-8b45-9c0181aca8dc] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-scheduler-ha-210800-m02" [51a4f5d3-0f41-4420-87ce-5ac44bb93e3c] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-vip-ha-210800" [b1216eb2-830b-4756-97c6-a35d5e74c718] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "kube-vip-ha-210800-m02" [ff2f83aa-9bdb-4dfc-98bf-d632984ef52d] Running
	I0507 18:38:24.606019    8396 system_pods.go:61] "storage-provisioner" [f05f26ec-1ebd-4111-adc5-825fc75a414d] Running
	I0507 18:38:24.606019    8396 system_pods.go:74] duration metric: took 168.1649ms to wait for pod list to return data ...
	I0507 18:38:24.606019    8396 default_sa.go:34] waiting for default service account to be created ...
	I0507 18:38:24.778357    8396 request.go:629] Waited for 172.1018ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/default/serviceaccounts
	I0507 18:38:24.778357    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/default/serviceaccounts
	I0507 18:38:24.778357    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:24.778357    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:24.778357    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:24.785539    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:38:24.785539    8396 default_sa.go:45] found service account: "default"
	I0507 18:38:24.785539    8396 default_sa.go:55] duration metric: took 179.5076ms for default service account to be created ...
	I0507 18:38:24.785539    8396 system_pods.go:116] waiting for k8s-apps to be running ...
	I0507 18:38:24.981805    8396 request.go:629] Waited for 196.2519ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:38:24.982397    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:38:24.982397    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:24.982506    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:24.982506    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:24.989973    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:38:24.996462    8396 system_pods.go:86] 17 kube-system pods found
	I0507 18:38:24.996462    8396 system_pods.go:89] "coredns-7db6d8ff4d-cr9nn" [24c45106-2ef4-4932-ae5d-549fb0177b13] Running
	I0507 18:38:24.996462    8396 system_pods.go:89] "coredns-7db6d8ff4d-dxsqf" [d32c637e-c641-4ef7-b2ed-b6449fe7d50f] Running
	I0507 18:38:24.996462    8396 system_pods.go:89] "etcd-ha-210800" [6888d4a2-b10e-4329-b3de-90fc4bb053f3] Running
	I0507 18:38:24.996462    8396 system_pods.go:89] "etcd-ha-210800-m02" [97f10401-7c02-421d-abe4-2b9f37dd3f39] Running
	I0507 18:38:24.996462    8396 system_pods.go:89] "kindnet-57g8k" [6067a407-ee57-44ab-9591-9217deded72a] Running
	I0507 18:38:24.996462    8396 system_pods.go:89] "kindnet-whrqx" [ded04b26-3100-453a-9c0f-0a7cced93180] Running
	I0507 18:38:24.996462    8396 system_pods.go:89] "kube-apiserver-ha-210800" [74b614eb-d1ef-4707-b1a9-faeb68a9abf4] Running
	I0507 18:38:24.996462    8396 system_pods.go:89] "kube-apiserver-ha-210800-m02" [3399e7eb-50f0-49a6-9dbe-1d5964e62a63] Running
	I0507 18:38:24.996462    8396 system_pods.go:89] "kube-controller-manager-ha-210800" [9d31f6b7-c758-4599-9087-d38a0f929769] Running
	I0507 18:38:24.997012    8396 system_pods.go:89] "kube-controller-manager-ha-210800-m02" [e20ed11b-7d94-407a-a1cb-0440b3b29eb9] Running
	I0507 18:38:24.997012    8396 system_pods.go:89] "kube-proxy-6qdqt" [83aff3e5-b08d-4b7e-8dc2-c2fd1fd9bec7] Running
	I0507 18:38:24.997012    8396 system_pods.go:89] "kube-proxy-rshfg" [2ce7075a-2b4a-4e31-80bf-7de27797a8d6] Running
	I0507 18:38:24.997066    8396 system_pods.go:89] "kube-scheduler-ha-210800" [37fbafc0-eae6-407e-8b45-9c0181aca8dc] Running
	I0507 18:38:24.997066    8396 system_pods.go:89] "kube-scheduler-ha-210800-m02" [51a4f5d3-0f41-4420-87ce-5ac44bb93e3c] Running
	I0507 18:38:24.997066    8396 system_pods.go:89] "kube-vip-ha-210800" [b1216eb2-830b-4756-97c6-a35d5e74c718] Running
	I0507 18:38:24.997107    8396 system_pods.go:89] "kube-vip-ha-210800-m02" [ff2f83aa-9bdb-4dfc-98bf-d632984ef52d] Running
	I0507 18:38:24.997107    8396 system_pods.go:89] "storage-provisioner" [f05f26ec-1ebd-4111-adc5-825fc75a414d] Running
	I0507 18:38:24.997107    8396 system_pods.go:126] duration metric: took 211.5531ms to wait for k8s-apps to be running ...
	I0507 18:38:24.997107    8396 system_svc.go:44] waiting for kubelet service to be running ....
	I0507 18:38:25.004087    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 18:38:25.027846    8396 system_svc.go:56] duration metric: took 30.7369ms WaitForService to wait for kubelet
	I0507 18:38:25.027961    8396 kubeadm.go:576] duration metric: took 18.2167808s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 18:38:25.028009    8396 node_conditions.go:102] verifying NodePressure condition ...
	I0507 18:38:25.184161    8396 request.go:629] Waited for 155.8275ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes
	I0507 18:38:25.184517    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes
	I0507 18:38:25.184517    8396 round_trippers.go:469] Request Headers:
	I0507 18:38:25.184624    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:38:25.184624    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:38:25.188899    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:38:25.190314    8396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 18:38:25.190314    8396 node_conditions.go:123] node cpu capacity is 2
	I0507 18:38:25.190314    8396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 18:38:25.190314    8396 node_conditions.go:123] node cpu capacity is 2
	I0507 18:38:25.190314    8396 node_conditions.go:105] duration metric: took 162.2931ms to run NodePressure ...
	I0507 18:38:25.190314    8396 start.go:240] waiting for startup goroutines ...
	I0507 18:38:25.190314    8396 start.go:254] writing updated cluster config ...
	I0507 18:38:25.194052    8396 out.go:177] 
	I0507 18:38:25.208158    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:38:25.209156    8396 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:38:25.213290    8396 out.go:177] * Starting "ha-210800-m03" control-plane node in "ha-210800" cluster
	I0507 18:38:25.220142    8396 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 18:38:25.220142    8396 cache.go:56] Caching tarball of preloaded images
	I0507 18:38:25.220142    8396 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0507 18:38:25.220142    8396 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 18:38:25.220142    8396 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:38:25.223408    8396 start.go:360] acquireMachinesLock for ha-210800-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 18:38:25.224205    8396 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-210800-m03"
	I0507 18:38:25.224386    8396 start.go:93] Provisioning new machine with config: &{Name:ha-210800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuberne
tesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.132.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.135.87 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Binar
yMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:38:25.224386    8396 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0507 18:38:25.226689    8396 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 18:38:25.227557    8396 start.go:159] libmachine.API.Create for "ha-210800" (driver="hyperv")
	I0507 18:38:25.227619    8396 client.go:168] LocalClient.Create starting
	I0507 18:38:25.227798    8396 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0507 18:38:25.228129    8396 main.go:141] libmachine: Decoding PEM data...
	I0507 18:38:25.228129    8396 main.go:141] libmachine: Parsing certificate...
	I0507 18:38:25.228287    8396 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0507 18:38:25.228418    8396 main.go:141] libmachine: Decoding PEM data...
	I0507 18:38:25.228418    8396 main.go:141] libmachine: Parsing certificate...
	I0507 18:38:25.228418    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0507 18:38:26.923053    8396 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0507 18:38:26.923053    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:26.924135    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0507 18:38:28.464544    8396 main.go:141] libmachine: [stdout =====>] : False
	
	I0507 18:38:28.464822    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:28.464822    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 18:38:29.824471    8396 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 18:38:29.824471    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:29.824985    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 18:38:33.103075    8396 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 18:38:33.103166    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:33.104962    8396 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0507 18:38:33.402984    8396 main.go:141] libmachine: Creating SSH key...
	I0507 18:38:33.702725    8396 main.go:141] libmachine: Creating VM...
	I0507 18:38:33.702725    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 18:38:36.303166    8396 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 18:38:36.304021    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:36.304100    8396 main.go:141] libmachine: Using switch "Default Switch"
	I0507 18:38:36.304100    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 18:38:37.899524    8396 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 18:38:37.899524    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:37.899524    8396 main.go:141] libmachine: Creating VHD
	I0507 18:38:37.899524    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0507 18:38:41.384681    8396 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : CE41D955-8D91-4A76-A8C2-269EA17A2698
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0507 18:38:41.385192    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:41.385192    8396 main.go:141] libmachine: Writing magic tar header
	I0507 18:38:41.385192    8396 main.go:141] libmachine: Writing SSH key tar header
	I0507 18:38:41.393966    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0507 18:38:44.368608    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:38:44.369328    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:44.369328    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\disk.vhd' -SizeBytes 20000MB
	I0507 18:38:46.683575    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:38:46.683575    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:46.683575    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-210800-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0507 18:38:49.959558    8396 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-210800-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0507 18:38:49.959558    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:49.959911    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-210800-m03 -DynamicMemoryEnabled $false
	I0507 18:38:51.997094    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:38:51.997444    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:51.997600    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-210800-m03 -Count 2
	I0507 18:38:53.983264    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:38:53.983472    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:53.983472    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-210800-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\boot2docker.iso'
	I0507 18:38:56.255555    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:38:56.255709    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:56.255709    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-210800-m03 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\disk.vhd'
	I0507 18:38:58.625911    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:38:58.625911    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:38:58.625911    8396 main.go:141] libmachine: Starting VM...
	I0507 18:38:58.625911    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-210800-m03
	I0507 18:39:01.407243    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:39:01.408246    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:01.408301    8396 main.go:141] libmachine: Waiting for host to start...
	I0507 18:39:01.408301    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:03.435590    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:03.435590    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:03.435679    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:05.672622    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:39:05.673363    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:06.685664    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:08.653123    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:08.653159    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:08.653305    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:10.914519    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:39:10.914962    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:11.922875    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:13.925351    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:13.926246    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:13.926444    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:16.221152    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:39:16.221190    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:17.226850    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:19.225396    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:19.225396    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:19.225396    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:21.503921    8396 main.go:141] libmachine: [stdout =====>] : 
	I0507 18:39:21.503921    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:22.505063    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:24.502024    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:24.502824    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:24.502824    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:26.883964    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:39:26.883964    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:26.884079    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:28.850902    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:28.851221    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:28.851221    8396 machine.go:94] provisionDockerMachine start ...
	I0507 18:39:28.851221    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:30.772220    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:30.772220    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:30.772317    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:33.054876    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:39:33.055401    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:33.059226    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:39:33.059914    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.137.224 22 <nil> <nil>}
	I0507 18:39:33.059914    8396 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 18:39:33.194916    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0507 18:39:33.194995    8396 buildroot.go:166] provisioning hostname "ha-210800-m03"
	I0507 18:39:33.194995    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:35.122163    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:35.122163    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:35.122163    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:37.407198    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:39:37.407198    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:37.412672    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:39:37.413395    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.137.224 22 <nil> <nil>}
	I0507 18:39:37.413395    8396 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-210800-m03 && echo "ha-210800-m03" | sudo tee /etc/hostname
	I0507 18:39:37.570948    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-210800-m03
	
	I0507 18:39:37.570948    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:39.474599    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:39.474661    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:39.475005    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:41.756911    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:39:41.757146    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:41.760782    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:39:41.761310    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.137.224 22 <nil> <nil>}
	I0507 18:39:41.761310    8396 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-210800-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-210800-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-210800-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 18:39:41.908119    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 18:39:41.908119    8396 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0507 18:39:41.908119    8396 buildroot.go:174] setting up certificates
	I0507 18:39:41.908119    8396 provision.go:84] configureAuth start
	I0507 18:39:41.908772    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:43.828143    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:43.829277    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:43.829277    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:46.130130    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:39:46.130212    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:46.130212    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:48.046300    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:48.046300    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:48.046394    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:50.364882    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:39:50.364882    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:50.364882    8396 provision.go:143] copyHostCerts
	I0507 18:39:50.365540    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0507 18:39:50.365749    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0507 18:39:50.365749    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0507 18:39:50.365749    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0507 18:39:50.366808    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0507 18:39:50.366808    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0507 18:39:50.366808    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0507 18:39:50.366808    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0507 18:39:50.368132    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0507 18:39:50.368132    8396 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0507 18:39:50.368132    8396 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0507 18:39:50.368132    8396 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0507 18:39:50.369282    8396 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-210800-m03 san=[127.0.0.1 172.19.137.224 ha-210800-m03 localhost minikube]
	I0507 18:39:50.528513    8396 provision.go:177] copyRemoteCerts
	I0507 18:39:50.541304    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 18:39:50.541304    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:52.470874    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:52.470874    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:52.470874    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:54.751170    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:39:54.751844    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:54.752236    8396 sshutil.go:53] new ssh client: &{IP:172.19.137.224 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\id_rsa Username:docker}
	I0507 18:39:54.856978    8396 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3152939s)
	I0507 18:39:54.856978    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0507 18:39:54.857455    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0507 18:39:54.899673    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0507 18:39:54.899947    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0507 18:39:54.941904    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0507 18:39:54.942130    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0507 18:39:54.987116    8396 provision.go:87] duration metric: took 13.0781064s to configureAuth
	I0507 18:39:54.987116    8396 buildroot.go:189] setting minikube options for container-runtime
	I0507 18:39:54.987578    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:39:54.987650    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:39:56.886507    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:39:56.887138    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:56.887138    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:39:59.197564    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:39:59.197651    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:39:59.204637    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:39:59.204637    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.137.224 22 <nil> <nil>}
	I0507 18:39:59.204637    8396 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 18:39:59.331642    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 18:39:59.331642    8396 buildroot.go:70] root file system type: tmpfs
	I0507 18:39:59.331642    8396 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 18:39:59.331642    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:01.265541    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:01.265541    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:01.265541    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:03.666802    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:03.666880    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:03.673288    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:40:03.673845    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.137.224 22 <nil> <nil>}
	I0507 18:40:03.673845    8396 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.132.69"
	Environment="NO_PROXY=172.19.132.69,172.19.135.87"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 18:40:03.830444    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.132.69
	Environment=NO_PROXY=172.19.132.69,172.19.135.87
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 18:40:03.830444    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:05.804748    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:05.805302    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:05.805379    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:08.158769    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:08.158769    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:08.163009    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:40:08.163621    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.137.224 22 <nil> <nil>}
	I0507 18:40:08.163621    8396 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 18:40:10.299214    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0507 18:40:10.299214    8396 machine.go:97] duration metric: took 41.4451728s to provisionDockerMachine
	I0507 18:40:10.299214    8396 client.go:171] duration metric: took 1m45.0644265s to LocalClient.Create
	I0507 18:40:10.299214    8396 start.go:167] duration metric: took 1m45.0653573s to libmachine.API.Create "ha-210800"
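The unit installation a few lines up follows a compare-or-replace pattern: the new unit is written to `docker.service.new`, and only when `diff` fails — either because the files differ or because the installed copy does not exist yet ("can't stat ... No such file or directory", as in this first-boot log) — is it moved into place and the service restarted. A file-only sketch of that pattern (the systemctl steps are omitted; the directory stands in for /lib/systemd/system):

```shell
#!/bin/sh
# Sketch of the diff-or-replace deployment used for docker.service above.
# DIR is a scratch stand-in for /lib/systemd/system; unit content is illustrative.
DIR=$(mktemp -d)
printf '[Service]\nExecStart=/usr/bin/dockerd --illustrative-flags\n' > "$DIR/docker.service.new"

# diff exits non-zero both when the files differ and when the target is
# missing, so either case triggers the replacement branch.
if ! diff -u "$DIR/docker.service" "$DIR/docker.service.new" 2>/dev/null; then
    mv "$DIR/docker.service.new" "$DIR/docker.service"
    # real run: sudo systemctl daemon-reload && sudo systemctl enable docker \
    #           && sudo systemctl restart docker   (omitted in this sketch)
fi
cat "$DIR/docker.service"
cp "$DIR/docker.service" /tmp/docker_service_result
```

The pattern avoids a needless daemon restart on reruns where the rendered unit is byte-identical to the installed one.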
	I0507 18:40:10.299214    8396 start.go:293] postStartSetup for "ha-210800-m03" (driver="hyperv")
	I0507 18:40:10.299214    8396 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 18:40:10.307679    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 18:40:10.307679    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:12.284014    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:12.284699    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:12.284699    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:14.599789    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:14.599789    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:14.600332    8396 sshutil.go:53] new ssh client: &{IP:172.19.137.224 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\id_rsa Username:docker}
	I0507 18:40:14.711988    8396 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.4039314s)
	I0507 18:40:14.724592    8396 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 18:40:14.733406    8396 info.go:137] Remote host: Buildroot 2023.02.9
	I0507 18:40:14.733406    8396 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0507 18:40:14.733997    8396 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0507 18:40:14.734112    8396 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> 99922.pem in /etc/ssl/certs
	I0507 18:40:14.734112    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /etc/ssl/certs/99922.pem
	I0507 18:40:14.743038    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0507 18:40:14.760893    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /etc/ssl/certs/99922.pem (1708 bytes)
	I0507 18:40:14.814797    8396 start.go:296] duration metric: took 4.5152764s for postStartSetup
	I0507 18:40:14.818517    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:16.734414    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:16.734414    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:16.735033    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:19.083500    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:19.083579    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:19.083579    8396 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\config.json ...
	I0507 18:40:19.085654    8396 start.go:128] duration metric: took 1m53.8535035s to createHost
	I0507 18:40:19.085729    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:21.019148    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:21.019148    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:21.019148    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:23.359162    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:23.359162    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:23.362701    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:40:23.363299    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.137.224 22 <nil> <nil>}
	I0507 18:40:23.363299    8396 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0507 18:40:23.499768    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715107223.738113002
	
	I0507 18:40:23.499768    8396 fix.go:216] guest clock: 1715107223.738113002
	I0507 18:40:23.499768    8396 fix.go:229] Guest: 2024-05-07 18:40:23.738113002 +0000 UTC Remote: 2024-05-07 18:40:19.0856542 +0000 UTC m=+518.903648801 (delta=4.652458802s)
	I0507 18:40:23.499768    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:25.439326    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:25.440046    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:25.440046    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:27.774327    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:27.774327    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:27.779654    8396 main.go:141] libmachine: Using SSH client type: native
	I0507 18:40:27.780034    8396 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.137.224 22 <nil> <nil>}
	I0507 18:40:27.780034    8396 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715107223
	I0507 18:40:27.915035    8396 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May  7 18:40:23 UTC 2024
	
	I0507 18:40:27.915035    8396 fix.go:236] clock set: Tue May  7 18:40:23 UTC 2024
	 (err=<nil>)
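The clock fix above reads the guest's time with `date +%s.%N`, compares it against the host-side reference, and resets the guest clock with `sudo date -s @<epoch>` when they disagree (here a skew of about 4.65 s). A sketch of the delta computation, using the two epoch values printed in the log:

```shell
#!/bin/sh
# Sketch of the guest/host clock-delta check logged above. The epoch values
# are the ones from the log; the reset command is shown but not executed.
GUEST=1715107223.738113002   # guest:  date +%s.%N
HOST=1715107219.0856542      # host reference (2024-05-07 18:40:19.0856542 UTC)

# shell arithmetic is integer-only, so use awk for the fractional delta
DELTA=$(awk -v g="$GUEST" -v h="$HOST" \
    'BEGIN { d = g - h; if (d < 0) d = -d; printf "%.6f", d }')
echo "delta=${DELTA}s"
# If the skew matters, the guest clock is reset to the host epoch:
#   sudo date -s @${HOST%.*}
echo "$DELTA" > /tmp/clock_delta
```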
	I0507 18:40:27.915035    8396 start.go:83] releasing machines lock for "ha-210800-m03", held for 2m2.6824314s
	I0507 18:40:27.915573    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:29.815560    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:29.815560    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:29.816069    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:32.120627    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:32.120627    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:32.124530    8396 out.go:177] * Found network options:
	I0507 18:40:32.143758    8396 out.go:177]   - NO_PROXY=172.19.132.69,172.19.135.87
	W0507 18:40:32.147240    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	W0507 18:40:32.147240    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	I0507 18:40:32.153318    8396 out.go:177]   - NO_PROXY=172.19.132.69,172.19.135.87
	W0507 18:40:32.155453    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	W0507 18:40:32.155453    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	W0507 18:40:32.156193    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	W0507 18:40:32.156193    8396 proxy.go:119] fail to check proxy env: Error ip not in block
	I0507 18:40:32.158869    8396 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 18:40:32.158976    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:32.166378    8396 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0507 18:40:32.166378    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:40:34.167883    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:34.167883    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:34.167883    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:34.168230    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:34.168230    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:34.168230    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:36.594910    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:36.594910    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:36.595798    8396 sshutil.go:53] new ssh client: &{IP:172.19.137.224 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\id_rsa Username:docker}
	I0507 18:40:36.620180    8396 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:40:36.620180    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:36.620878    8396 sshutil.go:53] new ssh client: &{IP:172.19.137.224 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\id_rsa Username:docker}
	I0507 18:40:36.689138    8396 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.5224539s)
	W0507 18:40:36.689138    8396 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 18:40:36.697177    8396 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0507 18:40:36.760083    8396 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0507 18:40:36.760083    8396 start.go:494] detecting cgroup driver to use...
	I0507 18:40:36.760083    8396 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.6009022s)
	I0507 18:40:36.760083    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 18:40:36.812555    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0507 18:40:36.840473    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 18:40:36.861051    8396 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 18:40:36.869048    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 18:40:36.896992    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 18:40:36.928440    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 18:40:36.958147    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 18:40:36.985538    8396 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 18:40:37.012025    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 18:40:37.038205    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 18:40:37.064841    8396 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0507 18:40:37.091488    8396 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 18:40:37.120567    8396 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 18:40:37.147354    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:40:37.324397    8396 ssh_runner.go:195] Run: sudo systemctl restart containerd
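The sed pipeline above rewrites containerd's config.toml so it matches the "cgroupfs" driver the test run selects; the key edit flips `SystemdCgroup` to `false` while preserving indentation via the captured leading whitespace. A sketch of that single substitution against a scratch config (the TOML content is illustrative, reduced to the one line the edit targets):

```shell
#!/bin/sh
# Sketch of the cgroup-driver rewrite applied to containerd's config above,
# run against a scratch config.toml instead of /etc/containerd/config.toml.
CFG=$(mktemp)
cat > "$CFG" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

# Same substitution as the logged command: \1 carries the original
# indentation, so nested TOML tables keep their layout.
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$CFG"
cat "$CFG"
cp "$CFG" /tmp/containerd_cfg_result
```

The other sed invocations in the log follow the same in-place pattern for the sandbox image, runtime type, and CNI `conf_dir` keys.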
	I0507 18:40:37.354501    8396 start.go:494] detecting cgroup driver to use...
	I0507 18:40:37.364990    8396 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 18:40:37.394511    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 18:40:37.429110    8396 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 18:40:37.466721    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 18:40:37.497293    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 18:40:37.528447    8396 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0507 18:40:37.586411    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 18:40:37.608157    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 18:40:37.652932    8396 ssh_runner.go:195] Run: which cri-dockerd
	I0507 18:40:37.668377    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 18:40:37.684371    8396 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 18:40:37.720817    8396 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 18:40:37.900446    8396 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 18:40:38.072754    8396 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 18:40:38.073201    8396 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0507 18:40:38.117691    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:40:38.291018    8396 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 18:40:40.766433    8396 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4752473s)
	I0507 18:40:40.775189    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 18:40:40.806689    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 18:40:40.838252    8396 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 18:40:41.032561    8396 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 18:40:41.211867    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:40:41.394761    8396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 18:40:41.433405    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 18:40:41.465120    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:40:41.651300    8396 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 18:40:41.753835    8396 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 18:40:41.762737    8396 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 18:40:41.775852    8396 start.go:562] Will wait 60s for crictl version
	I0507 18:40:41.787692    8396 ssh_runner.go:195] Run: which crictl
	I0507 18:40:41.800241    8396 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 18:40:41.859953    8396 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0507 18:40:41.866286    8396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 18:40:41.903013    8396 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 18:40:41.936560    8396 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0507 18:40:41.939758    8396 out.go:177]   - env NO_PROXY=172.19.132.69
	I0507 18:40:41.943054    8396 out.go:177]   - env NO_PROXY=172.19.132.69,172.19.135.87
	I0507 18:40:41.944759    8396 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0507 18:40:41.949518    8396 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0507 18:40:41.949574    8396 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0507 18:40:41.949574    8396 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0507 18:40:41.949574    8396 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a3:a5:4f Flags:up|broadcast|multicast|running}
	I0507 18:40:41.952293    8396 ip.go:210] interface addr: fe80::1edb:f5fd:c218:d8d2/64
	I0507 18:40:41.952353    8396 ip.go:210] interface addr: 172.19.128.1/20
	I0507 18:40:41.961209    8396 ssh_runner.go:195] Run: grep 172.19.128.1	host.minikube.internal$ /etc/hosts
	I0507 18:40:41.966378    8396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
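The hosts-file command logged above is an idempotent update: strip any stale `host.minikube.internal` line, append the current mapping, write to a temp file, then copy it into place. A local sketch with a temp file standing in for `/etc/hosts`:

```shell
# Idempotent hosts-file update, mirroring the logged command:
# filter out the old entry, append the fresh one, copy back.
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n10.0.0.9\thost.minikube.internal\n' > "$hosts"
{ grep -v 'host.minikube.internal$' "$hosts"; printf '172.19.128.1\thost.minikube.internal\n'; } > "$hosts.new"
cp "$hosts.new" "$hosts"
cat "$hosts"
```

Because the old entry is filtered before the append, re-running this leaves exactly one `host.minikube.internal` line.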
	I0507 18:40:41.989041    8396 mustload.go:65] Loading cluster: ha-210800
	I0507 18:40:41.989647    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:40:41.990345    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:40:43.927401    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:43.927401    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:43.927401    8396 host.go:66] Checking if "ha-210800" exists ...
	I0507 18:40:43.927924    8396 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800 for IP: 172.19.137.224
	I0507 18:40:43.927994    8396 certs.go:194] generating shared ca certs ...
	I0507 18:40:43.927994    8396 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:40:43.928517    8396 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0507 18:40:43.928594    8396 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0507 18:40:43.928594    8396 certs.go:256] generating profile certs ...
	I0507 18:40:43.929440    8396 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\client.key
	I0507 18:40:43.929440    8396 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.b99e8106
	I0507 18:40:43.929440    8396 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.b99e8106 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.132.69 172.19.135.87 172.19.137.224 172.19.143.254]
	I0507 18:40:44.148518    8396 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.b99e8106 ...
	I0507 18:40:44.148518    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.b99e8106: {Name:mk7a5e439aeccc02df3bdc8f3a9d3b314f05045d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:40:44.148956    8396 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.b99e8106 ...
	I0507 18:40:44.148956    8396 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.b99e8106: {Name:mk29a150d7d42cd36c6eb069713d060ebd6bf280 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 18:40:44.149877    8396 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt.b99e8106 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt
	I0507 18:40:44.163176    8396 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key.b99e8106 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key
	I0507 18:40:44.164446    8396 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key
	I0507 18:40:44.164446    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0507 18:40:44.164645    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0507 18:40:44.164744    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0507 18:40:44.164849    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0507 18:40:44.164934    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0507 18:40:44.165118    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0507 18:40:44.165475    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0507 18:40:44.165730    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0507 18:40:44.166254    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem (1338 bytes)
	W0507 18:40:44.166570    8396 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992_empty.pem, impossibly tiny 0 bytes
	I0507 18:40:44.166761    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0507 18:40:44.166967    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0507 18:40:44.167269    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0507 18:40:44.167429    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0507 18:40:44.167965    8396 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem (1708 bytes)
	I0507 18:40:44.168273    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /usr/share/ca-certificates/99922.pem
	I0507 18:40:44.168374    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:40:44.168598    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem -> /usr/share/ca-certificates/9992.pem
	I0507 18:40:44.168837    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:40:46.106284    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:46.106512    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:46.106512    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:48.393140    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:40:48.393775    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:48.394056    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:40:48.499894    8396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0507 18:40:48.507123    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0507 18:40:48.534235    8396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0507 18:40:48.541633    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0507 18:40:48.570274    8396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0507 18:40:48.576675    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0507 18:40:48.603825    8396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0507 18:40:48.610435    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1679 bytes)
	I0507 18:40:48.637866    8396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0507 18:40:48.645537    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0507 18:40:48.676084    8396 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0507 18:40:48.682781    8396 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0507 18:40:48.701521    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 18:40:48.751311    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 18:40:48.797997    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 18:40:48.843024    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0507 18:40:48.886804    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0507 18:40:48.932828    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0507 18:40:48.989081    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0507 18:40:49.033298    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\ha-210800\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0507 18:40:49.077394    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /usr/share/ca-certificates/99922.pem (1708 bytes)
	I0507 18:40:49.126276    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 18:40:49.168189    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem --> /usr/share/ca-certificates/9992.pem (1338 bytes)
	I0507 18:40:49.213827    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0507 18:40:49.246065    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0507 18:40:49.274555    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0507 18:40:49.302545    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1679 bytes)
	I0507 18:40:49.332108    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0507 18:40:49.363802    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0507 18:40:49.393835    8396 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0507 18:40:49.437185    8396 ssh_runner.go:195] Run: openssl version
	I0507 18:40:49.453938    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9992.pem && ln -fs /usr/share/ca-certificates/9992.pem /etc/ssl/certs/9992.pem"
	I0507 18:40:49.480996    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9992.pem
	I0507 18:40:49.487292    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 18:40:49.497188    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9992.pem
	I0507 18:40:49.512950    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9992.pem /etc/ssl/certs/51391683.0"
	I0507 18:40:49.542063    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99922.pem && ln -fs /usr/share/ca-certificates/99922.pem /etc/ssl/certs/99922.pem"
	I0507 18:40:49.568319    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99922.pem
	I0507 18:40:49.575176    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 18:40:49.582056    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99922.pem
	I0507 18:40:49.599563    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99922.pem /etc/ssl/certs/3ec20f2e.0"
	I0507 18:40:49.627992    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 18:40:49.654907    8396 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:40:49.661863    8396 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:40:49.671999    8396 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 18:40:49.688416    8396 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
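The `/etc/ssl/certs` symlink names in the log (`51391683.0`, `3ec20f2e.0`, `b5213941.0`) are the OpenSSL subject hashes printed by the preceding `openssl x509 -hash` runs, with a `.0` suffix, which is the layout OpenSSL's CApath lookup expects. A sketch with a throwaway self-signed CA (assumes the `openssl` CLI is installed):

```shell
# Recreate the hash-named symlink convention from the log:
# compute the subject hash of a cert, then link <hash>.0 to it.
dir="$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" -days 1 2>/dev/null
h="$(openssl x509 -hash -noout -in "$dir/ca.pem")"
ln -fs "$dir/ca.pem" "$dir/$h.0"
ls -l "$dir/$h.0"
```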
	I0507 18:40:49.721777    8396 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 18:40:49.729065    8396 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0507 18:40:49.729065    8396 kubeadm.go:928] updating node {m03 172.19.137.224 8443 v1.30.0 docker true true} ...
	I0507 18:40:49.729065    8396 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-210800-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.137.224
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0507 18:40:49.729065    8396 kube-vip.go:111] generating kube-vip config ...
	I0507 18:40:49.736876    8396 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0507 18:40:49.763268    8396 kube-vip.go:163] auto-enabling control-plane load-balancing in kube-vip
	I0507 18:40:49.763361    8396 kube-vip.go:133] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.19.143.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0507 18:40:49.773500    8396 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0507 18:40:49.788235    8396 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0507 18:40:49.796814    8396 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0507 18:40:49.817778    8396 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0507 18:40:49.817778    8396 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0507 18:40:49.817778    8396 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0507 18:40:49.817778    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0507 18:40:49.817778    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0507 18:40:49.829549    8396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0507 18:40:49.829549    8396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0507 18:40:49.829549    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 18:40:49.838655    8396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0507 18:40:49.838655    8396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0507 18:40:49.838655    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0507 18:40:49.838655    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0507 18:40:49.885430    8396 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0507 18:40:49.894800    8396 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0507 18:40:50.015305    8396 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0507 18:40:50.015420    8396 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
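The `?checksum=file:...sha256` suffixes on the download URLs earlier in this sequence mean each k8s binary is verified against its published SHA-256 before being pushed to the node. The equivalent check done by hand, with local stand-in files (assumes GNU `sha256sum`):

```shell
# Manual version of the checksum verification implied by the
# "?checksum=file:...sha256" download URLs, on stand-in files.
bin="$(mktemp)"
printf 'stand-in kubelet bytes' > "$bin"
sha256sum "$bin" | awk '{print $1}' > "$bin.sha256"   # plays the role of the published .sha256
actual="$(sha256sum "$bin" | awk '{print $1}')"
expected="$(cat "$bin.sha256")"
[ "$actual" = "$expected" ] && echo "checksum OK"
```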
	I0507 18:40:51.132208    8396 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0507 18:40:51.150273    8396 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (314 bytes)
	I0507 18:40:51.185092    8396 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 18:40:51.221630    8396 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0507 18:40:51.263531    8396 ssh_runner.go:195] Run: grep 172.19.143.254	control-plane.minikube.internal$ /etc/hosts
	I0507 18:40:51.269933    8396 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.143.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 18:40:51.301623    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:40:51.499076    8396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 18:40:51.528080    8396 host.go:66] Checking if "ha-210800" exists ...
	I0507 18:40:51.528504    8396 start.go:316] joinCluster: &{Name:ha-210800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 Clust
erName:ha-210800 Namespace:default APIServerHAVIP:172.19.143.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.132.69 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.135.87 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.19.137.224 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dn
s:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: D
isableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 18:40:51.528504    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0507 18:40:51.528504    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:40:53.434168    8396 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:40:53.434168    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:53.434554    8396 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:40:55.770459    8396 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:40:55.770698    8396 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:40:55.770919    8396 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:40:55.971634    8396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.4428301s)
	I0507 18:40:55.971634    8396 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.19.137.224 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:40:55.971758    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token flxcjm.6wq1lewpqlhhlihd --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-210800-m03 --control-plane --apiserver-advertise-address=172.19.137.224 --apiserver-bind-port=8443"
	I0507 18:41:38.061990    8396 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token flxcjm.6wq1lewpqlhhlihd --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-210800-m03 --control-plane --apiserver-advertise-address=172.19.137.224 --apiserver-bind-port=8443": (42.0873459s)
	I0507 18:41:38.062089    8396 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0507 18:41:38.832012    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-210800-m03 minikube.k8s.io/updated_at=2024_05_07T18_41_38_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f minikube.k8s.io/name=ha-210800 minikube.k8s.io/primary=false
	I0507 18:41:38.989146    8396 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-210800-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0507 18:41:39.135248    8396 start.go:318] duration metric: took 47.603536s to joinCluster
	I0507 18:41:39.135411    8396 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.19.137.224 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 18:41:39.138156    8396 out.go:177] * Verifying Kubernetes components...
	I0507 18:41:39.135964    8396 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:41:39.152937    8396 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 18:41:39.554495    8396 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 18:41:39.589409    8396 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:41:39.589993    8396 kapi.go:59] client config for ha-210800: &rest.Config{Host:"https://172.19.143.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-210800\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\ha-210800\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0507 18:41:39.590112    8396 kubeadm.go:477] Overriding stale ClientConfig host https://172.19.143.254:8443 with https://172.19.132.69:8443
	I0507 18:41:39.590999    8396 node_ready.go:35] waiting up to 6m0s for node "ha-210800-m03" to be "Ready" ...
	I0507 18:41:39.591114    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:39.591114    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:39.591114    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:39.591114    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:39.604518    8396 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0507 18:41:40.096282    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:40.096282    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:40.096282    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:40.096282    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:40.100870    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:40.603147    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:40.603147    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:40.603378    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:40.603378    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:40.607983    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:41.092775    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:41.092775    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:41.092775    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:41.092775    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:41.096966    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:41.600332    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:41.600332    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:41.600332    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:41.600332    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:41.926619    8396 round_trippers.go:574] Response Status: 200 OK in 326 milliseconds
	I0507 18:41:41.927477    8396 node_ready.go:53] node "ha-210800-m03" has status "Ready":"False"
	I0507 18:41:42.105509    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:42.105509    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:42.105509    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:42.105509    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:42.109109    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:42.599004    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:42.599212    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:42.599212    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:42.599212    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:42.603574    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:43.100149    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:43.100182    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:43.100182    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:43.100239    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:43.104498    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:43.605678    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:43.605678    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:43.605678    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:43.605678    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:43.608852    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:44.107775    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:44.107775    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:44.107775    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:44.107866    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:44.112264    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:44.113716    8396 node_ready.go:53] node "ha-210800-m03" has status "Ready":"False"
	I0507 18:41:44.593419    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:44.593506    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:44.593506    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:44.593506    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:44.598049    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:45.093613    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:45.093613    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:45.093613    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:45.093613    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:45.100447    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:41:45.595856    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:45.595856    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:45.595856    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:45.595856    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:45.601132    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:46.094597    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:46.094597    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:46.094851    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:46.094851    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:46.102663    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:41:46.594334    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:46.594414    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:46.594414    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:46.594488    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:46.599096    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:46.601260    8396 node_ready.go:53] node "ha-210800-m03" has status "Ready":"False"
	I0507 18:41:47.105267    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:47.105267    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.105267    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.105267    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.112610    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:41:47.117448    8396 node_ready.go:49] node "ha-210800-m03" has status "Ready":"True"
	I0507 18:41:47.117448    8396 node_ready.go:38] duration metric: took 7.5259064s for node "ha-210800-m03" to be "Ready" ...
	I0507 18:41:47.117448    8396 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 18:41:47.117559    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:41:47.117559    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.117559    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.117559    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.130014    8396 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0507 18:41:47.139989    8396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.139989    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-cr9nn
	I0507 18:41:47.139989    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.139989    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.139989    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.143631    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:47.144627    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:47.144627    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.144627    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.144627    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.147682    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:47.149118    8396 pod_ready.go:92] pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:47.149181    8396 pod_ready.go:81] duration metric: took 9.1916ms for pod "coredns-7db6d8ff4d-cr9nn" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.149181    8396 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.149273    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-dxsqf
	I0507 18:41:47.149273    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.149273    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.149273    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.152673    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:47.154372    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:47.154446    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.154446    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.154446    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.156687    8396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:41:47.157709    8396 pod_ready.go:92] pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:47.157709    8396 pod_ready.go:81] duration metric: took 8.5277ms for pod "coredns-7db6d8ff4d-dxsqf" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.157709    8396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.157709    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800
	I0507 18:41:47.157709    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.157709    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.157709    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.160924    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:47.161992    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:47.162080    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.162080    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.162080    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.165155    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:47.166216    8396 pod_ready.go:92] pod "etcd-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:47.166216    8396 pod_ready.go:81] duration metric: took 8.5062ms for pod "etcd-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.166216    8396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.166364    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m02
	I0507 18:41:47.166390    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.166423    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.166423    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.169597    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:47.170546    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:47.170546    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.170546    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.170546    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.173116    8396 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 18:41:47.175021    8396 pod_ready.go:92] pod "etcd-ha-210800-m02" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:47.175021    8396 pod_ready.go:81] duration metric: took 8.8048ms for pod "etcd-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.175021    8396 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-210800-m03" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:47.311361    8396 request.go:629] Waited for 136.0223ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:47.311493    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:47.311493    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.311493    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.311493    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.316142    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:47.514704    8396 request.go:629] Waited for 197.4328ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:47.514763    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:47.514763    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.514763    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.514763    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.533529    8396 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0507 18:41:47.705769    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:47.705769    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.705769    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.705769    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.712867    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:41:47.910114    8396 request.go:629] Waited for 196.522ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:47.910486    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:47.910486    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:47.910486    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:47.910486    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:47.914654    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:48.176864    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:48.176946    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:48.176946    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:48.176946    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:48.180277    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:48.317137    8396 request.go:629] Waited for 134.6006ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:48.317137    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:48.317137    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:48.317137    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:48.317137    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:48.322243    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:41:48.677103    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:48.677163    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:48.677163    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:48.677163    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:48.682016    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:48.707532    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:48.707532    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:48.707532    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:48.707532    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:48.712121    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:49.189713    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:49.189713    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:49.190176    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:49.190248    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:49.197691    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:41:49.199069    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:49.199069    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:49.199069    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:49.199069    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:49.202665    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:49.203682    8396 pod_ready.go:102] pod "etcd-ha-210800-m03" in "kube-system" namespace has status "Ready":"False"
	I0507 18:41:49.686023    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:49.686023    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:49.686023    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:49.686023    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:49.693907    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:41:49.695045    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:49.695045    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:49.695045    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:49.695045    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:49.698638    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:50.186057    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:50.186057    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:50.186057    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:50.186057    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:50.190880    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:50.192343    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:50.192452    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:50.192452    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:50.192452    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:50.195728    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:50.687659    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/etcd-ha-210800-m03
	I0507 18:41:50.687732    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:50.687732    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:50.687732    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:50.691987    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:50.693441    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:50.693503    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:50.693503    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:50.693503    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:50.697288    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:50.698308    8396 pod_ready.go:92] pod "etcd-ha-210800-m03" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:50.698308    8396 pod_ready.go:81] duration metric: took 3.5230501s for pod "etcd-ha-210800-m03" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:50.698308    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:50.698411    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-210800
	I0507 18:41:50.698474    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:50.698474    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:50.698474    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:50.701617    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:50.718370    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:50.718370    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:50.718370    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:50.718370    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:50.721617    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:50.723011    8396 pod_ready.go:92] pod "kube-apiserver-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:50.723011    8396 pod_ready.go:81] duration metric: took 24.7015ms for pod "kube-apiserver-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:50.723011    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:50.908454    8396 request.go:629] Waited for 185.4305ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-210800-m02
	I0507 18:41:50.908681    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-210800-m02
	I0507 18:41:50.908780    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:50.908780    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:50.908780    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:50.912536    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:51.113412    8396 request.go:629] Waited for 199.3029ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:51.113412    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:51.113412    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:51.113412    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:51.113775    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:51.123760    8396 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0507 18:41:51.124421    8396 pod_ready.go:92] pod "kube-apiserver-ha-210800-m02" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:51.124421    8396 pod_ready.go:81] duration metric: took 401.3827ms for pod "kube-apiserver-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:51.124491    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-210800-m03" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:51.316105    8396 request.go:629] Waited for 191.6013ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-210800-m03
	I0507 18:41:51.316432    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-210800-m03
	I0507 18:41:51.316824    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:51.316824    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:51.316824    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:51.322053    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:41:51.517655    8396 request.go:629] Waited for 194.4659ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:51.518066    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:51.518066    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:51.518066    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:51.518066    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:51.522328    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:51.523456    8396 pod_ready.go:92] pod "kube-apiserver-ha-210800-m03" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:51.523557    8396 pod_ready.go:81] duration metric: took 399.0393ms for pod "kube-apiserver-ha-210800-m03" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:51.523557    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:51.707969    8396 request.go:629] Waited for 183.9757ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-210800
	I0507 18:41:51.708082    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-210800
	I0507 18:41:51.708082    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:51.708082    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:51.708184    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:51.714738    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:41:51.910804    8396 request.go:629] Waited for 194.7891ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:51.911125    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:51.911125    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:51.911125    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:51.911125    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:51.917487    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:41:51.922939    8396 pod_ready.go:92] pod "kube-controller-manager-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:51.922939    8396 pod_ready.go:81] duration metric: took 399.3552ms for pod "kube-controller-manager-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:51.922939    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:52.114748    8396 request.go:629] Waited for 191.7963ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-210800-m02
	I0507 18:41:52.115066    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-210800-m02
	I0507 18:41:52.115066    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:52.115066    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:52.115133    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:52.119071    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:52.316381    8396 request.go:629] Waited for 195.6783ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:52.316629    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:52.316629    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:52.316629    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:52.316728    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:52.323042    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:41:52.323988    8396 pod_ready.go:92] pod "kube-controller-manager-ha-210800-m02" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:52.323988    8396 pod_ready.go:81] duration metric: took 401.0225ms for pod "kube-controller-manager-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:52.323988    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-210800-m03" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:52.520424    8396 request.go:629] Waited for 196.4226ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-210800-m03
	I0507 18:41:52.520424    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-210800-m03
	I0507 18:41:52.520424    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:52.520424    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:52.520424    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:52.523692    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:52.710160    8396 request.go:629] Waited for 185.0954ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:52.710160    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:52.710160    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:52.710408    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:52.710408    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:52.718264    8396 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 18:41:52.719196    8396 pod_ready.go:92] pod "kube-controller-manager-ha-210800-m03" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:52.719196    8396 pod_ready.go:81] duration metric: took 395.1817ms for pod "kube-controller-manager-ha-210800-m03" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:52.719196    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6qdqt" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:52.912493    8396 request.go:629] Waited for 193.0358ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6qdqt
	I0507 18:41:52.912626    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6qdqt
	I0507 18:41:52.912708    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:52.912770    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:52.912770    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:52.915821    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:53.118300    8396 request.go:629] Waited for 199.2201ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:53.118648    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:53.118648    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:53.118648    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:53.118835    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:53.123047    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:53.123047    8396 pod_ready.go:92] pod "kube-proxy-6qdqt" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:53.123047    8396 pod_ready.go:81] duration metric: took 403.8238ms for pod "kube-proxy-6qdqt" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:53.123047    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-rshfg" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:53.306187    8396 request.go:629] Waited for 182.1207ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rshfg
	I0507 18:41:53.306381    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-rshfg
	I0507 18:41:53.306381    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:53.306381    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:53.306442    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:53.310606    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:53.507869    8396 request.go:629] Waited for 196.1929ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:53.507869    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:53.507869    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:53.507869    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:53.507869    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:53.512291    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:53.513491    8396 pod_ready.go:92] pod "kube-proxy-rshfg" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:53.513491    8396 pod_ready.go:81] duration metric: took 390.4174ms for pod "kube-proxy-rshfg" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:53.513622    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-tnxck" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:53.712924    8396 request.go:629] Waited for 199.148ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tnxck
	I0507 18:41:53.712924    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-proxy-tnxck
	I0507 18:41:53.713138    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:53.713138    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:53.713138    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:53.722160    8396 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 18:41:53.916470    8396 request.go:629] Waited for 193.4441ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:53.916470    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:53.916601    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:53.916601    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:53.916769    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:53.930262    8396 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0507 18:41:53.931311    8396 pod_ready.go:92] pod "kube-proxy-tnxck" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:53.931311    8396 pod_ready.go:81] duration metric: took 417.6616ms for pod "kube-proxy-tnxck" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:53.931311    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:54.117498    8396 request.go:629] Waited for 186.0931ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800
	I0507 18:41:54.117498    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800
	I0507 18:41:54.117498    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:54.117498    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:54.117498    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:54.121150    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:54.320080    8396 request.go:629] Waited for 197.7954ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:54.320649    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800
	I0507 18:41:54.320649    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:54.320649    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:54.320649    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:54.324235    8396 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 18:41:54.326147    8396 pod_ready.go:92] pod "kube-scheduler-ha-210800" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:54.326256    8396 pod_ready.go:81] duration metric: took 394.918ms for pod "kube-scheduler-ha-210800" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:54.326256    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:54.507519    8396 request.go:629] Waited for 181.0734ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800-m02
	I0507 18:41:54.507519    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800-m02
	I0507 18:41:54.507736    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:54.507736    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:54.507736    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:54.511796    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:54.709113    8396 request.go:629] Waited for 196.0884ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:54.709315    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m02
	I0507 18:41:54.709315    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:54.709315    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:54.709315    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:54.713965    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:54.715132    8396 pod_ready.go:92] pod "kube-scheduler-ha-210800-m02" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:54.715193    8396 pod_ready.go:81] duration metric: took 388.8499ms for pod "kube-scheduler-ha-210800-m02" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:54.715193    8396 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-210800-m03" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:54.913415    8396 request.go:629] Waited for 198.1384ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800-m03
	I0507 18:41:54.913912    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-210800-m03
	I0507 18:41:54.913997    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:54.913997    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:54.913997    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:54.918031    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:55.117008    8396 request.go:629] Waited for 197.3951ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:55.117212    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes/ha-210800-m03
	I0507 18:41:55.117212    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:55.117212    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:55.117212    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:55.121870    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:55.122810    8396 pod_ready.go:92] pod "kube-scheduler-ha-210800-m03" in "kube-system" namespace has status "Ready":"True"
	I0507 18:41:55.122810    8396 pod_ready.go:81] duration metric: took 407.5901ms for pod "kube-scheduler-ha-210800-m03" in "kube-system" namespace to be "Ready" ...
	I0507 18:41:55.122810    8396 pod_ready.go:38] duration metric: took 8.0047384s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 18:41:55.122810    8396 api_server.go:52] waiting for apiserver process to appear ...
	I0507 18:41:55.132388    8396 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 18:41:55.156016    8396 api_server.go:72] duration metric: took 16.019453s to wait for apiserver process to appear ...
	I0507 18:41:55.156016    8396 api_server.go:88] waiting for apiserver healthz status ...
	I0507 18:41:55.156016    8396 api_server.go:253] Checking apiserver healthz at https://172.19.132.69:8443/healthz ...
	I0507 18:41:55.164759    8396 api_server.go:279] https://172.19.132.69:8443/healthz returned 200:
	ok
	I0507 18:41:55.165066    8396 round_trippers.go:463] GET https://172.19.132.69:8443/version
	I0507 18:41:55.165066    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:55.165066    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:55.165066    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:55.165771    8396 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0507 18:41:55.166814    8396 api_server.go:141] control plane version: v1.30.0
	I0507 18:41:55.166814    8396 api_server.go:131] duration metric: took 10.7972ms to wait for apiserver health ...
	I0507 18:41:55.166814    8396 system_pods.go:43] waiting for kube-system pods to appear ...
	I0507 18:41:55.319395    8396 request.go:629] Waited for 152.4587ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:41:55.319588    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:41:55.319588    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:55.322301    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:55.322301    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:55.329097    8396 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 18:41:55.338462    8396 system_pods.go:59] 24 kube-system pods found
	I0507 18:41:55.338579    8396 system_pods.go:61] "coredns-7db6d8ff4d-cr9nn" [24c45106-2ef4-4932-ae5d-549fb0177b13] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "coredns-7db6d8ff4d-dxsqf" [d32c637e-c641-4ef7-b2ed-b6449fe7d50f] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "etcd-ha-210800" [6888d4a2-b10e-4329-b3de-90fc4bb053f3] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "etcd-ha-210800-m02" [97f10401-7c02-421d-abe4-2b9f37dd3f39] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "etcd-ha-210800-m03" [5f8c792a-5610-476c-b0b2-3016b3b63926] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kindnet-57g8k" [6067a407-ee57-44ab-9591-9217deded72a] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kindnet-6xzk7" [313799a0-9188-4c07-817c-e46c98c84eb6] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kindnet-whrqx" [ded04b26-3100-453a-9c0f-0a7cced93180] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-apiserver-ha-210800" [74b614eb-d1ef-4707-b1a9-faeb68a9abf4] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-apiserver-ha-210800-m02" [3399e7eb-50f0-49a6-9dbe-1d5964e62a63] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-apiserver-ha-210800-m03" [e3215a44-5844-4caa-abb7-8acd94b221ad] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-controller-manager-ha-210800" [9d31f6b7-c758-4599-9087-d38a0f929769] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-controller-manager-ha-210800-m02" [e20ed11b-7d94-407a-a1cb-0440b3b29eb9] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-controller-manager-ha-210800-m03" [ff82d94b-b3f9-484c-ab24-aa37c6243cf7] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-proxy-6qdqt" [83aff3e5-b08d-4b7e-8dc2-c2fd1fd9bec7] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-proxy-rshfg" [2ce7075a-2b4a-4e31-80bf-7de27797a8d6] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-proxy-tnxck" [8cc3ed39-c2bd-4139-9ff6-1cbc0c210b5f] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-scheduler-ha-210800" [37fbafc0-eae6-407e-8b45-9c0181aca8dc] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-scheduler-ha-210800-m02" [51a4f5d3-0f41-4420-87ce-5ac44bb93e3c] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-scheduler-ha-210800-m03" [b6a0dd6e-e43f-40d1-a56b-841269b3e8a4] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-vip-ha-210800" [b1216eb2-830b-4756-97c6-a35d5e74c718] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-vip-ha-210800-m02" [ff2f83aa-9bdb-4dfc-98bf-d632984ef52d] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "kube-vip-ha-210800-m03" [12dde05a-34a8-4d68-9c37-3c5398b5f146] Running
	I0507 18:41:55.338579    8396 system_pods.go:61] "storage-provisioner" [f05f26ec-1ebd-4111-adc5-825fc75a414d] Running
	I0507 18:41:55.338579    8396 system_pods.go:74] duration metric: took 171.7541ms to wait for pod list to return data ...
	I0507 18:41:55.338579    8396 default_sa.go:34] waiting for default service account to be created ...
	I0507 18:41:55.521253    8396 request.go:629] Waited for 182.6614ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/default/serviceaccounts
	I0507 18:41:55.521253    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/default/serviceaccounts
	I0507 18:41:55.521253    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:55.521253    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:55.521253    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:55.525994    8396 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 18:41:55.526289    8396 default_sa.go:45] found service account: "default"
	I0507 18:41:55.526289    8396 default_sa.go:55] duration metric: took 187.697ms for default service account to be created ...
	I0507 18:41:55.526289    8396 system_pods.go:116] waiting for k8s-apps to be running ...
	I0507 18:41:55.708078    8396 request.go:629] Waited for 181.5099ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:41:55.708078    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/namespaces/kube-system/pods
	I0507 18:41:55.708078    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:55.708078    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:55.708078    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:55.723085    8396 round_trippers.go:574] Response Status: 200 OK in 15 milliseconds
	I0507 18:41:55.733628    8396 system_pods.go:86] 24 kube-system pods found
	I0507 18:41:55.733628    8396 system_pods.go:89] "coredns-7db6d8ff4d-cr9nn" [24c45106-2ef4-4932-ae5d-549fb0177b13] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "coredns-7db6d8ff4d-dxsqf" [d32c637e-c641-4ef7-b2ed-b6449fe7d50f] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "etcd-ha-210800" [6888d4a2-b10e-4329-b3de-90fc4bb053f3] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "etcd-ha-210800-m02" [97f10401-7c02-421d-abe4-2b9f37dd3f39] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "etcd-ha-210800-m03" [5f8c792a-5610-476c-b0b2-3016b3b63926] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kindnet-57g8k" [6067a407-ee57-44ab-9591-9217deded72a] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kindnet-6xzk7" [313799a0-9188-4c07-817c-e46c98c84eb6] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kindnet-whrqx" [ded04b26-3100-453a-9c0f-0a7cced93180] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-apiserver-ha-210800" [74b614eb-d1ef-4707-b1a9-faeb68a9abf4] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-apiserver-ha-210800-m02" [3399e7eb-50f0-49a6-9dbe-1d5964e62a63] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-apiserver-ha-210800-m03" [e3215a44-5844-4caa-abb7-8acd94b221ad] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-controller-manager-ha-210800" [9d31f6b7-c758-4599-9087-d38a0f929769] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-controller-manager-ha-210800-m02" [e20ed11b-7d94-407a-a1cb-0440b3b29eb9] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-controller-manager-ha-210800-m03" [ff82d94b-b3f9-484c-ab24-aa37c6243cf7] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-proxy-6qdqt" [83aff3e5-b08d-4b7e-8dc2-c2fd1fd9bec7] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-proxy-rshfg" [2ce7075a-2b4a-4e31-80bf-7de27797a8d6] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-proxy-tnxck" [8cc3ed39-c2bd-4139-9ff6-1cbc0c210b5f] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-scheduler-ha-210800" [37fbafc0-eae6-407e-8b45-9c0181aca8dc] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-scheduler-ha-210800-m02" [51a4f5d3-0f41-4420-87ce-5ac44bb93e3c] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-scheduler-ha-210800-m03" [b6a0dd6e-e43f-40d1-a56b-841269b3e8a4] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-vip-ha-210800" [b1216eb2-830b-4756-97c6-a35d5e74c718] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-vip-ha-210800-m02" [ff2f83aa-9bdb-4dfc-98bf-d632984ef52d] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "kube-vip-ha-210800-m03" [12dde05a-34a8-4d68-9c37-3c5398b5f146] Running
	I0507 18:41:55.733685    8396 system_pods.go:89] "storage-provisioner" [f05f26ec-1ebd-4111-adc5-825fc75a414d] Running
	I0507 18:41:55.733685    8396 system_pods.go:126] duration metric: took 207.3825ms to wait for k8s-apps to be running ...
	I0507 18:41:55.733685    8396 system_svc.go:44] waiting for kubelet service to be running ....
	I0507 18:41:55.741571    8396 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 18:41:55.765567    8396 system_svc.go:56] duration metric: took 31.8799ms WaitForService to wait for kubelet
	I0507 18:41:55.765567    8396 kubeadm.go:576] duration metric: took 16.6289637s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 18:41:55.765894    8396 node_conditions.go:102] verifying NodePressure condition ...
	I0507 18:41:55.911291    8396 request.go:629] Waited for 145.3374ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.132.69:8443/api/v1/nodes
	I0507 18:41:55.911625    8396 round_trippers.go:463] GET https://172.19.132.69:8443/api/v1/nodes
	I0507 18:41:55.911819    8396 round_trippers.go:469] Request Headers:
	I0507 18:41:55.911819    8396 round_trippers.go:473]     Accept: application/json, */*
	I0507 18:41:55.911819    8396 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 18:41:55.917283    8396 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 18:41:55.919641    8396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 18:41:55.919699    8396 node_conditions.go:123] node cpu capacity is 2
	I0507 18:41:55.919699    8396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 18:41:55.919757    8396 node_conditions.go:123] node cpu capacity is 2
	I0507 18:41:55.919757    8396 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 18:41:55.919757    8396 node_conditions.go:123] node cpu capacity is 2
	I0507 18:41:55.919757    8396 node_conditions.go:105] duration metric: took 153.8531ms to run NodePressure ...
	I0507 18:41:55.919757    8396 start.go:240] waiting for startup goroutines ...
	I0507 18:41:55.919823    8396 start.go:254] writing updated cluster config ...
	I0507 18:41:55.927927    8396 ssh_runner.go:195] Run: rm -f paused
	I0507 18:41:56.049806    8396 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0507 18:41:56.053257    8396 out.go:177] * Done! kubectl is now configured to use "ha-210800" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 07 18:34:57 ha-210800 dockerd[1330]: time="2024-05-07T18:34:57.948411648Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 18:34:57 ha-210800 dockerd[1330]: time="2024-05-07T18:34:57.948429949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:34:57 ha-210800 dockerd[1330]: time="2024-05-07T18:34:57.948563853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:34:58 ha-210800 dockerd[1330]: time="2024-05-07T18:34:58.003862041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 18:34:58 ha-210800 dockerd[1330]: time="2024-05-07T18:34:58.003982751Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 18:34:58 ha-210800 dockerd[1330]: time="2024-05-07T18:34:58.004038755Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:34:58 ha-210800 dockerd[1330]: time="2024-05-07T18:34:58.004229571Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:42:30 ha-210800 dockerd[1330]: time="2024-05-07T18:42:30.186419392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 18:42:30 ha-210800 dockerd[1330]: time="2024-05-07T18:42:30.186614715Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 18:42:30 ha-210800 dockerd[1330]: time="2024-05-07T18:42:30.186652920Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:42:30 ha-210800 dockerd[1330]: time="2024-05-07T18:42:30.188673163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:42:30 ha-210800 cri-dockerd[1230]: time="2024-05-07T18:42:30Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b1d6504330bebeae4eeeff81fa941452b6f3245a3a80aa39f24526d7a0989f57/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 07 18:42:31 ha-210800 cri-dockerd[1230]: time="2024-05-07T18:42:31Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 07 18:42:31 ha-210800 dockerd[1330]: time="2024-05-07T18:42:31.842486781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 18:42:31 ha-210800 dockerd[1330]: time="2024-05-07T18:42:31.843265536Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 18:42:31 ha-210800 dockerd[1330]: time="2024-05-07T18:42:31.843375543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:42:31 ha-210800 dockerd[1330]: time="2024-05-07T18:42:31.843817375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 18:43:30 ha-210800 dockerd[1324]: 2024/05/07 18:43:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 18:43:30 ha-210800 dockerd[1324]: 2024/05/07 18:43:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 18:43:30 ha-210800 dockerd[1324]: 2024/05/07 18:43:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 18:43:30 ha-210800 dockerd[1324]: 2024/05/07 18:43:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 18:43:30 ha-210800 dockerd[1324]: 2024/05/07 18:43:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 18:43:30 ha-210800 dockerd[1324]: 2024/05/07 18:43:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 18:43:30 ha-210800 dockerd[1324]: 2024/05/07 18:43:30 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 18:43:31 ha-210800 dockerd[1324]: 2024/05/07 18:43:31 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	f8b94835b1deb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   20 minutes ago      Running             busybox                   0                   b1d6504330beb       busybox-fc5497c4f-pkgxl
	f09de8b01ca58       cbb01a7bd410d                                                                                         27 minutes ago      Running             coredns                   0                   c11b861ad1aeb       coredns-7db6d8ff4d-cr9nn
	a77f029cbd2de       cbb01a7bd410d                                                                                         27 minutes ago      Running             coredns                   0                   9e9fb991e5a5a       coredns-7db6d8ff4d-dxsqf
	2ac532428458f       6e38f40d628db                                                                                         27 minutes ago      Running             storage-provisioner       0                   a65cae5cd54a4       storage-provisioner
	3dcbef7bd0b66       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              27 minutes ago      Running             kindnet-cni               0                   14b94a1625979       kindnet-whrqx
	b876902be49e2       a0bf559e280cf                                                                                         28 minutes ago      Running             kube-proxy                0                   4313824e7fd6c       kube-proxy-6qdqt
	18ea360a18fd6       ghcr.io/kube-vip/kube-vip@sha256:82698885b3b5f926cd940b7000549f3d43850cb6565a708162900c1475a83016     28 minutes ago      Running             kube-vip                  0                   73b333b99ce9e       kube-vip-ha-210800
	4fc364eaa2527       3861cfcd7c04c                                                                                         28 minutes ago      Running             etcd                      0                   ec0441a1413ba       etcd-ha-210800
	c22f717c4b95d       c42f13656d0b2                                                                                         28 minutes ago      Running             kube-apiserver            0                   818b2dd2ca6f4       kube-apiserver-ha-210800
	74353e51a6877       259c8277fcbbc                                                                                         28 minutes ago      Running             kube-scheduler            0                   bc9c4b58404e6       kube-scheduler-ha-210800
	cf981f1729cd7       c7aad43836fa5                                                                                         28 minutes ago      Running             kube-controller-manager   0                   d326bdf8575cd       kube-controller-manager-ha-210800
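The `container status` table above is whitespace-aligned columnar output. A minimal parsing sketch for turning such rows into records; the regex, field names, and the `parse_row` helper are my own assumptions about the layout (CREATED spans three tokens, "POD ID" is one column), not part of minikube:

```python
import re

# One row of the "container status" table, copied from the log above.
ROW = ("f8b94835b1deb       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   "
      "20 minutes ago      Running             busybox                   0                   b1d6504330beb       busybox-fc5497c4f-pkgxl")

# Assumed layout: whitespace-separated columns, CREATED = "N minutes ago".
PATTERN = re.compile(
    r"^(?P<container>\S+)\s+(?P<image>\S+)\s+(?P<created>\d+ \w+ ago)\s+"
    r"(?P<state>\S+)\s+(?P<name>\S+)\s+(?P<attempt>\d+)\s+"
    r"(?P<pod_id>\S+)\s+(?P<pod>\S+)$"
)

def parse_row(line: str) -> dict:
    """Parse one table row into a dict of named columns."""
    m = PATTERN.match(line.strip())
    if not m:
        raise ValueError(f"unrecognized row: {line!r}")
    return m.groupdict()
```

With rows parsed this way, filtering the table (e.g. for non-`Running` containers) becomes a one-liner.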
	
	
	==> coredns [a77f029cbd2d] <==
	[INFO] 10.244.2.2:53294 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.001342295s
	[INFO] 10.244.2.2:55335 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.087816092s
	[INFO] 10.244.0.4:34617 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.000168111s
	[INFO] 10.244.1.2:33633 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000177012s
	[INFO] 10.244.1.2:60462 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000116308s
	[INFO] 10.244.1.2:51078 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.024472724s
	[INFO] 10.244.1.2:54231 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000107308s
	[INFO] 10.244.2.2:33146 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000300921s
	[INFO] 10.244.2.2:58735 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.022066754s
	[INFO] 10.244.2.2:33872 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000063805s
	[INFO] 10.244.0.4:54683 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000231416s
	[INFO] 10.244.0.4:58329 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000112108s
	[INFO] 10.244.0.4:45568 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000201414s
	[INFO] 10.244.0.4:49397 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000238817s
	[INFO] 10.244.0.4:38120 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00013901s
	[INFO] 10.244.0.4:51207 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000160111s
	[INFO] 10.244.1.2:49813 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163411s
	[INFO] 10.244.1.2:56905 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000116908s
	[INFO] 10.244.2.2:33150 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074005s
	[INFO] 10.244.2.2:50679 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000055104s
	[INFO] 10.244.0.4:42344 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000199314s
	[INFO] 10.244.0.4:52324 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000130709s
	[INFO] 10.244.2.2:38390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000097507s
	[INFO] 10.244.0.4:49226 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000101007s
	[INFO] 10.244.0.4:43530 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00027882s
	
	
	==> coredns [f09de8b01ca5] <==
	[INFO] 10.244.1.2:55132 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000168312s
	[INFO] 10.244.1.2:55132 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128209s
	[INFO] 10.244.1.2:46721 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000094507s
	[INFO] 10.244.2.2:45292 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00013741s
	[INFO] 10.244.2.2:55232 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000115408s
	[INFO] 10.244.2.2:55636 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000198514s
	[INFO] 10.244.2.2:42347 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000316422s
	[INFO] 10.244.2.2:42047 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000067204s
	[INFO] 10.244.0.4:40064 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00014381s
	[INFO] 10.244.0.4:45487 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000071905s
	[INFO] 10.244.1.2:56546 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000171012s
	[INFO] 10.244.1.2:52521 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000191613s
	[INFO] 10.244.2.2:58214 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00015391s
	[INFO] 10.244.2.2:36361 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000163311s
	[INFO] 10.244.0.4:35616 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.00013741s
	[INFO] 10.244.0.4:50859 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000193914s
	[INFO] 10.244.1.2:36175 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000144411s
	[INFO] 10.244.1.2:55812 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189113s
	[INFO] 10.244.1.2:46867 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000253918s
	[INFO] 10.244.1.2:35616 - 5 "PTR IN 1.128.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000099707s
	[INFO] 10.244.2.2:50751 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000231816s
	[INFO] 10.244.2.2:47535 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000065805s
	[INFO] 10.244.2.2:59367 - 5 "PTR IN 1.128.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000137909s
	[INFO] 10.244.0.4:41079 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000091106s
	[INFO] 10.244.0.4:42737 - 5 "PTR IN 1.128.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000078005s
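The coredns lines above follow the fixed layout of CoreDNS's log plugin: client, query id, a quoted query description, response code, response flags, response size, and duration. A sketch that extracts those fields; the pattern and field names are my reading of that layout, so treat them as assumptions:

```python
import re

# One query log line, copied from the coredns output above.
LINE = ('[INFO] 10.244.0.4:43530 - 3 "AAAA IN host.minikube.internal. '
        'udp 40 false 512" NOERROR qr,aa,rd 40 0.00027882s')

# Assumed field order inside the quotes: type, class, name, proto,
# query size, DO bit, advertised buffer size.
PATTERN = re.compile(
    r'\[INFO\] (?P<client>[\d.]+):(?P<port>\d+) - (?P<qid>\d+) '
    r'"(?P<qtype>\S+) (?P<qclass>\S+) (?P<name>\S+) (?P<proto>\S+) '
    r'(?P<size>\d+) (?P<do>\S+) (?P<bufsize>\d+)" '
    r'(?P<rcode>\S+) (?P<flags>\S+) (?P<rsize>\d+) (?P<duration_s>[\d.]+)s'
)

def parse_query(line: str) -> dict:
    """Parse one CoreDNS query log line into named fields."""
    m = PATTERN.search(line)
    if not m:
        raise ValueError(f"not a query log line: {line!r}")
    d = m.groupdict()
    d["duration_s"] = float(d["duration_s"])
    return d
```

Aggregating `duration_s` per `rcode` over the lines above would show, for instance, that the `NXDOMAIN` recursive lookups are orders of magnitude slower than the authoritative cluster-local answers.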
	
	
	==> describe nodes <==
	Name:               ha-210800
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-210800
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	                    minikube.k8s.io/name=ha-210800
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_07T18_34_32_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 May 2024 18:34:29 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-210800
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 May 2024 19:02:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 May 2024 18:58:30 +0000   Tue, 07 May 2024 18:34:29 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 May 2024 18:58:30 +0000   Tue, 07 May 2024 18:34:29 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 May 2024 18:58:30 +0000   Tue, 07 May 2024 18:34:29 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 May 2024 18:58:30 +0000   Tue, 07 May 2024 18:34:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.132.69
	  Hostname:    ha-210800
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 3762c80c825f49a3ae881c2a62f2f1d9
	  System UUID:                30a5d089-0cbf-a64e-9e54-7723c068114e
	  Boot ID:                    89e3cf68-dc62-4793-b3a7-44a759255eb8
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-pkgxl              0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 coredns-7db6d8ff4d-cr9nn             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 coredns-7db6d8ff4d-dxsqf             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     28m
	  kube-system                 etcd-ha-210800                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         28m
	  kube-system                 kindnet-whrqx                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      28m
	  kube-system                 kube-apiserver-ha-210800             250m (12%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-controller-manager-ha-210800    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-proxy-6qdqt                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-scheduler-ha-210800             100m (5%)     0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 kube-vip-ha-210800                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 28m   kube-proxy       
	  Normal  Starting                 28m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  28m   kubelet          Node ha-210800 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m   kubelet          Node ha-210800 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m   kubelet          Node ha-210800 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  28m   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           28m   node-controller  Node ha-210800 event: Registered Node ha-210800 in Controller
	  Normal  NodeReady                27m   kubelet          Node ha-210800 status is now: NodeReady
	  Normal  RegisteredNode           24m   node-controller  Node ha-210800 event: Registered Node ha-210800 in Controller
	  Normal  RegisteredNode           20m   node-controller  Node ha-210800 event: Registered Node ha-210800 in Controller
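The percentages in the Allocated resources table are each request or limit divided by the node's Allocatable capacity (here 2 CPUs and 2164264Ki memory), truncated to a whole percent. A quick check of the figures; the truncation behavior is my inference from the numbers (950m of 2000m shows 47%, not a rounded 48%):

```python
def pct(used: float, capacity: float) -> int:
    # Truncate rather than round, matching the displayed values.
    return int(100 * used / capacity)

ALLOC_CPU_MILLI = 2000   # 2 CPUs, from the node's Allocatable block
ALLOC_MEM_KI = 2164264   # memory: 2164264Ki

cpu_requests = pct(950, ALLOC_CPU_MILLI)       # 950m  -> 47
mem_requests = pct(290 * 1024, ALLOC_MEM_KI)   # 290Mi -> 13
mem_limits   = pct(390 * 1024, ALLOC_MEM_KI)   # 390Mi -> 18
```

The same arithmetic reproduces the 37%/7% figures on the m02 and m03 nodes (750m CPU, 150Mi memory requested against identical capacities).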
	
	
	Name:               ha-210800-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-210800-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	                    minikube.k8s.io/name=ha-210800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_07T18_38_06_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 May 2024 18:38:01 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-210800-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 May 2024 19:02:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 May 2024 19:00:42 +0000   Tue, 07 May 2024 19:00:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 May 2024 19:00:42 +0000   Tue, 07 May 2024 19:00:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 May 2024 19:00:42 +0000   Tue, 07 May 2024 19:00:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 May 2024 19:00:42 +0000   Tue, 07 May 2024 19:00:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.143.44
	  Hostname:    ha-210800-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 35ab3c12047f4452a9cabeaa4b66c331
	  System UUID:                2d5aaff5-a686-984b-8ed1-ccbdc90fbe68
	  Boot ID:                    95e39c81-16fd-40a4-a1d6-9ed4eb5cb5a9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-45d7p                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 etcd-ha-210800-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         24m
	  kube-system                 kindnet-57g8k                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-apiserver-ha-210800-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-controller-manager-ha-210800-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-proxy-rshfg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-scheduler-ha-210800-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         24m
	  kube-system                 kube-vip-ha-210800-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 24m                  kube-proxy       
	  Normal   Starting                 2m1s                 kube-proxy       
	  Normal   NodeHasSufficientMemory  24m (x8 over 24m)    kubelet          Node ha-210800-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    24m (x8 over 24m)    kubelet          Node ha-210800-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     24m (x7 over 24m)    kubelet          Node ha-210800-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  24m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           24m                  node-controller  Node ha-210800-m02 event: Registered Node ha-210800-m02 in Controller
	  Normal   RegisteredNode           24m                  node-controller  Node ha-210800-m02 event: Registered Node ha-210800-m02 in Controller
	  Normal   RegisteredNode           20m                  node-controller  Node ha-210800-m02 event: Registered Node ha-210800-m02 in Controller
	  Normal   NodeNotReady             4m39s                node-controller  Node ha-210800-m02 status is now: NodeNotReady
	  Normal   Starting                 2m5s                 kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m5s (x2 over 2m5s)  kubelet          Node ha-210800-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m5s (x2 over 2m5s)  kubelet          Node ha-210800-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m5s (x2 over 2m5s)  kubelet          Node ha-210800-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 2m5s                 kubelet          Node ha-210800-m02 has been rebooted, boot id: 95e39c81-16fd-40a4-a1d6-9ed4eb5cb5a9
	  Normal   NodeReady                2m5s                 kubelet          Node ha-210800-m02 status is now: NodeReady
	  Normal   NodeAllocatableEnforced  2m5s                 kubelet          Updated Node Allocatable limit across pods
	
	
	Name:               ha-210800-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-210800-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	                    minikube.k8s.io/name=ha-210800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_07T18_41_38_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 May 2024 18:41:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-210800-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 May 2024 19:02:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 May 2024 18:57:52 +0000   Tue, 07 May 2024 18:41:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 May 2024 18:57:52 +0000   Tue, 07 May 2024 18:41:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 May 2024 18:57:52 +0000   Tue, 07 May 2024 18:41:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 May 2024 18:57:52 +0000   Tue, 07 May 2024 18:41:47 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.137.224
	  Hostname:    ha-210800-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 d50db00b4138473185a198a540b0b97e
	  System UUID:                dac136fa-cba9-624b-b4aa-a625b5da5027
	  Boot ID:                    55352b6c-080b-4436-a6af-1832e99644a9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5z998                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 etcd-ha-210800-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         21m
	  kube-system                 kindnet-6xzk7                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	  kube-system                 kube-apiserver-ha-210800-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-controller-manager-ha-210800-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-proxy-tnxck                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-scheduler-ha-210800-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	  kube-system                 kube-vip-ha-210800-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 21m                kube-proxy       
	  Normal  NodeHasSufficientMemory  21m (x8 over 21m)  kubelet          Node ha-210800-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21m (x8 over 21m)  kubelet          Node ha-210800-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21m (x7 over 21m)  kubelet          Node ha-210800-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           21m                node-controller  Node ha-210800-m03 event: Registered Node ha-210800-m03 in Controller
	  Normal  RegisteredNode           21m                node-controller  Node ha-210800-m03 event: Registered Node ha-210800-m03 in Controller
	  Normal  RegisteredNode           20m                node-controller  Node ha-210800-m03 event: Registered Node ha-210800-m03 in Controller
	
	
	Name:               ha-210800-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-210800-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	                    minikube.k8s.io/name=ha-210800
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_07T18_46_23_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 May 2024 18:46:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-210800-m04
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 May 2024 19:02:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 May 2024 19:02:11 +0000   Tue, 07 May 2024 18:46:22 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 May 2024 19:02:11 +0000   Tue, 07 May 2024 18:46:22 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 May 2024 19:02:11 +0000   Tue, 07 May 2024 18:46:22 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 May 2024 19:02:11 +0000   Tue, 07 May 2024 18:46:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.129.171
	  Hostname:    ha-210800-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 74cf3740572a4e97a5fe8f02426d618e
	  System UUID:                da6a74a3-2818-1940-bc61-2e6972835dec
	  Boot ID:                    e0495729-f92f-4ab5-81e1-f9b1714ff037
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-trg6b       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-proxy-255rr    0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  RegisteredNode           16m                node-controller  Node ha-210800-m04 event: Registered Node ha-210800-m04 in Controller
	  Normal  NodeHasSufficientMemory  16m (x2 over 16m)  kubelet          Node ha-210800-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x2 over 16m)  kubelet          Node ha-210800-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x2 over 16m)  kubelet          Node ha-210800-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node ha-210800-m04 event: Registered Node ha-210800-m04 in Controller
	  Normal  RegisteredNode           16m                node-controller  Node ha-210800-m04 event: Registered Node ha-210800-m04 in Controller
	  Normal  NodeReady                16m                kubelet          Node ha-210800-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +1.177492] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.071372] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May 7 18:33] systemd-fstab-generator[645]: Ignoring "noauto" option for root device
	[  +0.163170] systemd-fstab-generator[657]: Ignoring "noauto" option for root device
	[ +28.479803] systemd-fstab-generator[946]: Ignoring "noauto" option for root device
	[  +0.094426] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.505844] systemd-fstab-generator[985]: Ignoring "noauto" option for root device
	[  +0.183097] systemd-fstab-generator[997]: Ignoring "noauto" option for root device
	[  +0.198618] systemd-fstab-generator[1011]: Ignoring "noauto" option for root device
	[May 7 18:34] systemd-fstab-generator[1183]: Ignoring "noauto" option for root device
	[  +0.178318] systemd-fstab-generator[1195]: Ignoring "noauto" option for root device
	[  +0.180207] systemd-fstab-generator[1207]: Ignoring "noauto" option for root device
	[  +0.256957] systemd-fstab-generator[1222]: Ignoring "noauto" option for root device
	[ +11.617075] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.084567] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.756347] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
	[  +5.583391] systemd-fstab-generator[1711]: Ignoring "noauto" option for root device
	[  +0.096605] kauditd_printk_skb: 73 callbacks suppressed
	[  +5.603939] kauditd_printk_skb: 67 callbacks suppressed
	[  +2.935224] systemd-fstab-generator[2196]: Ignoring "noauto" option for root device
	[ +15.022371] kauditd_printk_skb: 17 callbacks suppressed
	[  +6.256164] kauditd_printk_skb: 29 callbacks suppressed
	[May 7 18:38] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [4fc364eaa252] <==
	{"level":"warn","ts":"2024-05-07T19:02:47.648447Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"40991da5e0f0d8fd","rtt":"1.253902ms","error":"dial tcp 172.19.135.87:2380: i/o timeout"}
	{"level":"warn","ts":"2024-05-07T19:02:47.878188Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:47.883511Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:47.895122Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:47.904224Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:47.913685Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:47.919007Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:47.921458Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:47.923939Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:47.942805Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:47.953084Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:47.962098Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:47.967677Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:47.971454Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:47.981332Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:47.988422Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:47.996178Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:48.000641Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:48.006085Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:48.012952Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:48.021281Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:48.021497Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:48.029639Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:48.074347Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-05-07T19:02:48.076747Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"8e95dbab746ce898","from":"8e95dbab746ce898","remote-peer-id":"40991da5e0f0d8fd","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 19:02:48 up 30 min,  0 users,  load average: 0.36, 0.33, 0.39
	Linux ha-210800 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [3dcbef7bd0b6] <==
	I0507 19:02:15.753247       1 main.go:250] Node ha-210800-m04 has CIDR [10.244.3.0/24] 
	I0507 19:02:25.769849       1 main.go:223] Handling node with IPs: map[172.19.132.69:{}]
	I0507 19:02:25.770066       1 main.go:227] handling current node
	I0507 19:02:25.770194       1 main.go:223] Handling node with IPs: map[172.19.143.44:{}]
	I0507 19:02:25.770283       1 main.go:250] Node ha-210800-m02 has CIDR [10.244.1.0/24] 
	I0507 19:02:25.770679       1 main.go:223] Handling node with IPs: map[172.19.137.224:{}]
	I0507 19:02:25.770787       1 main.go:250] Node ha-210800-m03 has CIDR [10.244.2.0/24] 
	I0507 19:02:25.771045       1 main.go:223] Handling node with IPs: map[172.19.129.171:{}]
	I0507 19:02:25.771115       1 main.go:250] Node ha-210800-m04 has CIDR [10.244.3.0/24] 
	I0507 19:02:35.779289       1 main.go:223] Handling node with IPs: map[172.19.132.69:{}]
	I0507 19:02:35.779397       1 main.go:227] handling current node
	I0507 19:02:35.779410       1 main.go:223] Handling node with IPs: map[172.19.143.44:{}]
	I0507 19:02:35.779419       1 main.go:250] Node ha-210800-m02 has CIDR [10.244.1.0/24] 
	I0507 19:02:35.779655       1 main.go:223] Handling node with IPs: map[172.19.137.224:{}]
	I0507 19:02:35.779844       1 main.go:250] Node ha-210800-m03 has CIDR [10.244.2.0/24] 
	I0507 19:02:35.779993       1 main.go:223] Handling node with IPs: map[172.19.129.171:{}]
	I0507 19:02:35.780164       1 main.go:250] Node ha-210800-m04 has CIDR [10.244.3.0/24] 
	I0507 19:02:45.792488       1 main.go:223] Handling node with IPs: map[172.19.132.69:{}]
	I0507 19:02:45.792767       1 main.go:227] handling current node
	I0507 19:02:45.792837       1 main.go:223] Handling node with IPs: map[172.19.143.44:{}]
	I0507 19:02:45.792946       1 main.go:250] Node ha-210800-m02 has CIDR [10.244.1.0/24] 
	I0507 19:02:45.793078       1 main.go:223] Handling node with IPs: map[172.19.137.224:{}]
	I0507 19:02:45.793101       1 main.go:250] Node ha-210800-m03 has CIDR [10.244.2.0/24] 
	I0507 19:02:45.793160       1 main.go:223] Handling node with IPs: map[172.19.129.171:{}]
	I0507 19:02:45.793232       1 main.go:250] Node ha-210800-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [c22f717c4b95] <==
	E0507 18:42:37.869061       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51296: use of closed network connection
	E0507 18:42:38.286676       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51298: use of closed network connection
	E0507 18:42:38.697286       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51300: use of closed network connection
	E0507 18:42:39.121239       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51302: use of closed network connection
	E0507 18:42:39.567325       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51304: use of closed network connection
	E0507 18:42:40.315976       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51307: use of closed network connection
	E0507 18:42:50.736641       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51309: use of closed network connection
	E0507 18:42:51.150409       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51314: use of closed network connection
	E0507 18:43:01.578109       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51316: use of closed network connection
	E0507 18:43:01.994136       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51318: use of closed network connection
	E0507 18:43:12.416957       1 conn.go:339] Error on socket receive: read tcp 172.19.143.254:8443->172.19.128.1:51320: use of closed network connection
	W0507 18:57:40.239617       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.132.69 172.19.137.224]
	I0507 18:57:53.340316       1 trace.go:236] Trace[1959043440]: "Get" accept:application/json, */*,audit-id:6d257de4-d363-4bad-b4be-45d0292ea933,client:127.0.0.1,api-group:coordination.k8s.io,api-version:v1,name:plndr-cp-lock,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock,user-agent:kube-vip/v0.0.0 (linux/amd64) kubernetes/$Format,verb:GET (07-May-2024 18:57:52.697) (total time: 642ms):
	Trace[1959043440]: ---"About to write a response" 642ms (18:57:53.340)
	Trace[1959043440]: [642.355939ms] [642.355939ms] END
	I0507 18:57:53.340800       1 trace.go:236] Trace[1939201477]: "Patch" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:ca6bbab4-26af-4e9e-b3e5-418faf31c082,client:172.19.137.224,api-group:,api-version:v1,name:ha-210800-m03,subresource:status,namespace:,protocol:HTTP/2.0,resource:nodes,scope:resource,url:/api/v1/nodes/ha-210800-m03/status,user-agent:kubelet/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PATCH (07-May-2024 18:57:52.755) (total time: 585ms):
	Trace[1939201477]: ["GuaranteedUpdate etcd3" audit-id:ca6bbab4-26af-4e9e-b3e5-418faf31c082,key:/minions/ha-210800-m03,type:*core.Node,resource:nodes 585ms (18:57:52.755)
	Trace[1939201477]:  ---"Txn call completed" 580ms (18:57:53.338)]
	Trace[1939201477]: ---"Object stored in database" 580ms (18:57:53.338)
	Trace[1939201477]: [585.563861ms] [585.563861ms] END
	I0507 18:58:00.835017       1 trace.go:236] Trace[690839486]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/172.19.132.69,type:*v1.Endpoints,resource:apiServerIPInfo (07-May-2024 18:58:00.218) (total time: 616ms):
	Trace[690839486]: ---"initial value restored" 261ms (18:58:00.480)
	Trace[690839486]: ---"Transaction prepared" 166ms (18:58:00.646)
	Trace[690839486]: ---"Txn call completed" 188ms (18:58:00.834)
	Trace[690839486]: [616.58337ms] [616.58337ms] END
	
	
	==> kube-controller-manager [cf981f1729cd] <==
	I0507 18:42:29.542409       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="183.821592ms"
	I0507 18:42:29.772737       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="230.124183ms"
	I0507 18:42:29.804774       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.984662ms"
	I0507 18:42:29.805114       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="100.212µs"
	I0507 18:42:31.350995       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="139.11µs"
	I0507 18:42:31.755560       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="46.703µs"
	I0507 18:42:31.902466       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="222.415µs"
	I0507 18:42:32.393154       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.043827ms"
	I0507 18:42:32.393309       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="52.004µs"
	I0507 18:42:32.542787       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="63.518386ms"
	I0507 18:42:32.564643       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="21.796839ms"
	I0507 18:42:32.564779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="83.606µs"
	E0507 18:46:22.225002       1 certificate_controller.go:146] Sync csr-g7vfv failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-g7vfv": the object has been modified; please apply your changes to the latest version and try again
	E0507 18:46:22.265835       1 certificate_controller.go:146] Sync csr-g7vfv failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-g7vfv": the object has been modified; please apply your changes to the latest version and try again
	I0507 18:46:22.340969       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-210800-m04\" does not exist"
	I0507 18:46:22.397972       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-210800-m04" podCIDRs=["10.244.3.0/24"]
	I0507 18:46:24.310178       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-210800-m04"
	I0507 18:46:42.645047       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-210800-m04"
	I0507 18:58:08.711200       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-210800-m04"
	I0507 18:58:08.770499       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.030643ms"
	I0507 18:58:08.771071       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.405µs"
	I0507 19:00:42.616899       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-210800-m04"
	I0507 19:00:43.441015       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="54.003µs"
	I0507 19:00:47.316240       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.852318ms"
	I0507 19:00:47.316456       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.602µs"
	
	
	==> kube-proxy [b876902be49e] <==
	I0507 18:34:46.534436       1 server_linux.go:69] "Using iptables proxy"
	I0507 18:34:46.610982       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.132.69"]
	I0507 18:34:46.662572       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0507 18:34:46.662679       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0507 18:34:46.662726       1 server_linux.go:165] "Using iptables Proxier"
	I0507 18:34:46.666466       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0507 18:34:46.667450       1 server.go:872] "Version info" version="v1.30.0"
	I0507 18:34:46.667762       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 18:34:46.670202       1 config.go:192] "Starting service config controller"
	I0507 18:34:46.670945       1 config.go:101] "Starting endpoint slice config controller"
	I0507 18:34:46.671296       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0507 18:34:46.672219       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0507 18:34:46.675362       1 config.go:319] "Starting node config controller"
	I0507 18:34:46.676170       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0507 18:34:46.773861       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0507 18:34:46.773924       1 shared_informer.go:320] Caches are synced for service config
	I0507 18:34:46.776515       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [74353e51a687] <==
	W0507 18:34:29.201175       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0507 18:34:29.201212       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0507 18:34:29.382625       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0507 18:34:29.382869       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0507 18:34:29.386321       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0507 18:34:29.386358       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0507 18:34:29.451226       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0507 18:34:29.452168       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0507 18:34:29.452040       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0507 18:34:29.452756       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0507 18:34:29.462385       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0507 18:34:29.463415       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0507 18:34:31.638259       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0507 18:42:29.271983       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-45d7p\": pod busybox-fc5497c4f-45d7p is already assigned to node \"ha-210800-m02\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-45d7p" node="ha-210800-m02"
	E0507 18:42:29.272368       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod c4b0b74b-2782-4a8c-9ccb-822e2beb946e(default/busybox-fc5497c4f-45d7p) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-45d7p"
	E0507 18:42:29.272726       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-45d7p\": pod busybox-fc5497c4f-45d7p is already assigned to node \"ha-210800-m02\"" pod="default/busybox-fc5497c4f-45d7p"
	I0507 18:42:29.272925       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-45d7p" node="ha-210800-m02"
	E0507 18:46:22.642557       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fd2tw\": pod kindnet-fd2tw is already assigned to node \"ha-210800-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-fd2tw" node="ha-210800-m04"
	E0507 18:46:22.642623       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 2b82d278-7c13-49ea-b769-6083adc2b8cc(kube-system/kindnet-fd2tw) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-fd2tw"
	E0507 18:46:22.642642       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fd2tw\": pod kindnet-fd2tw is already assigned to node \"ha-210800-m04\"" pod="kube-system/kindnet-fd2tw"
	I0507 18:46:22.642863       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-fd2tw" node="ha-210800-m04"
	E0507 18:46:22.643747       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-d5dfm\": pod kube-proxy-d5dfm is already assigned to node \"ha-210800-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-d5dfm" node="ha-210800-m04"
	E0507 18:46:22.643795       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 600ff515-8bc8-46d5-8709-833f6f8bd0d0(kube-system/kube-proxy-d5dfm) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-d5dfm"
	E0507 18:46:22.643809       1 schedule_one.go:1048] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-d5dfm\": pod kube-proxy-d5dfm is already assigned to node \"ha-210800-m04\"" pod="kube-system/kube-proxy-d5dfm"
	I0507 18:46:22.643824       1 schedule_one.go:1061] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-d5dfm" node="ha-210800-m04"
	
	
	==> kubelet <==
	May 07 18:58:31 ha-210800 kubelet[2203]: E0507 18:58:31.356289    2203 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 18:58:31 ha-210800 kubelet[2203]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 18:58:31 ha-210800 kubelet[2203]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 18:58:31 ha-210800 kubelet[2203]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 18:58:31 ha-210800 kubelet[2203]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 18:59:31 ha-210800 kubelet[2203]: E0507 18:59:31.356330    2203 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 18:59:31 ha-210800 kubelet[2203]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 18:59:31 ha-210800 kubelet[2203]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 18:59:31 ha-210800 kubelet[2203]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 18:59:31 ha-210800 kubelet[2203]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 19:00:31 ha-210800 kubelet[2203]: E0507 19:00:31.357691    2203 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 19:00:31 ha-210800 kubelet[2203]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 19:00:31 ha-210800 kubelet[2203]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 19:00:31 ha-210800 kubelet[2203]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 19:00:31 ha-210800 kubelet[2203]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 19:01:31 ha-210800 kubelet[2203]: E0507 19:01:31.356825    2203 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 19:01:31 ha-210800 kubelet[2203]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 19:01:31 ha-210800 kubelet[2203]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 19:01:31 ha-210800 kubelet[2203]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 19:01:31 ha-210800 kubelet[2203]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 19:02:31 ha-210800 kubelet[2203]: E0507 19:02:31.355415    2203 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 19:02:31 ha-210800 kubelet[2203]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 19:02:31 ha-210800 kubelet[2203]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 19:02:31 ha-210800 kubelet[2203]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 19:02:31 ha-210800 kubelet[2203]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
** stderr ** 
	W0507 19:02:40.536667   11704 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-210800 -n ha-210800
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-210800 -n ha-210800: (10.7794462s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-210800 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartSecondaryNode (257.13s)

TestMultiNode/serial/PingHostFrom2Pods (51.62s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- exec busybox-fc5497c4f-cpw2r -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- exec busybox-fc5497c4f-cpw2r -- sh -c "ping -c 1 172.19.128.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- exec busybox-fc5497c4f-cpw2r -- sh -c "ping -c 1 172.19.128.1": exit status 1 (10.3977222s)

-- stdout --
	PING 172.19.128.1 (172.19.128.1): 56 data bytes
	
	--- 172.19.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0507 19:37:31.516666    1812 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.19.128.1) from pod (busybox-fc5497c4f-cpw2r): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- exec busybox-fc5497c4f-gcqlv -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- exec busybox-fc5497c4f-gcqlv -- sh -c "ping -c 1 172.19.128.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- exec busybox-fc5497c4f-gcqlv -- sh -c "ping -c 1 172.19.128.1": exit status 1 (10.4051087s)

-- stdout --
	PING 172.19.128.1 (172.19.128.1): 56 data bytes
	
	--- 172.19.128.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	W0507 19:37:42.337771   10128 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	command terminated with exit code 1

** /stderr **
multinode_test.go:584: Failed to ping host (172.19.128.1) from pod (busybox-fc5497c4f-gcqlv): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-600000 -n multinode-600000
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-600000 -n multinode-600000: (10.6079023s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 logs -n 25: (7.3904408s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-200900 ssh -- ls                    | mount-start-2-200900 | minikube5\jenkins | v1.33.0 | 07 May 24 19:27 UTC | 07 May 24 19:27 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-200900                           | mount-start-1-200900 | minikube5\jenkins | v1.33.0 | 07 May 24 19:27 UTC | 07 May 24 19:27 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-200900 ssh -- ls                    | mount-start-2-200900 | minikube5\jenkins | v1.33.0 | 07 May 24 19:27 UTC | 07 May 24 19:28 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-200900                           | mount-start-2-200900 | minikube5\jenkins | v1.33.0 | 07 May 24 19:28 UTC | 07 May 24 19:28 UTC |
	| start   | -p mount-start-2-200900                           | mount-start-2-200900 | minikube5\jenkins | v1.33.0 | 07 May 24 19:28 UTC | 07 May 24 19:30 UTC |
	| mount   | C:\Users\jenkins.minikube5:/minikube-host         | mount-start-2-200900 | minikube5\jenkins | v1.33.0 | 07 May 24 19:30 UTC |                     |
	|         | --profile mount-start-2-200900 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-200900 ssh -- ls                    | mount-start-2-200900 | minikube5\jenkins | v1.33.0 | 07 May 24 19:30 UTC | 07 May 24 19:30 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-200900                           | mount-start-2-200900 | minikube5\jenkins | v1.33.0 | 07 May 24 19:30 UTC | 07 May 24 19:30 UTC |
	| delete  | -p mount-start-1-200900                           | mount-start-1-200900 | minikube5\jenkins | v1.33.0 | 07 May 24 19:30 UTC | 07 May 24 19:30 UTC |
	| start   | -p multinode-600000                               | multinode-600000     | minikube5\jenkins | v1.33.0 | 07 May 24 19:30 UTC | 07 May 24 19:37 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-600000 -- apply -f                   | multinode-600000     | minikube5\jenkins | v1.33.0 | 07 May 24 19:37 UTC | 07 May 24 19:37 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-600000 -- rollout                    | multinode-600000     | minikube5\jenkins | v1.33.0 | 07 May 24 19:37 UTC | 07 May 24 19:37 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-600000 -- get pods -o                | multinode-600000     | minikube5\jenkins | v1.33.0 | 07 May 24 19:37 UTC | 07 May 24 19:37 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-600000 -- get pods -o                | multinode-600000     | minikube5\jenkins | v1.33.0 | 07 May 24 19:37 UTC | 07 May 24 19:37 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-600000 -- exec                       | multinode-600000     | minikube5\jenkins | v1.33.0 | 07 May 24 19:37 UTC | 07 May 24 19:37 UTC |
	|         | busybox-fc5497c4f-cpw2r --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-600000 -- exec                       | multinode-600000     | minikube5\jenkins | v1.33.0 | 07 May 24 19:37 UTC | 07 May 24 19:37 UTC |
	|         | busybox-fc5497c4f-gcqlv --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-600000 -- exec                       | multinode-600000     | minikube5\jenkins | v1.33.0 | 07 May 24 19:37 UTC | 07 May 24 19:37 UTC |
	|         | busybox-fc5497c4f-cpw2r --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-600000 -- exec                       | multinode-600000     | minikube5\jenkins | v1.33.0 | 07 May 24 19:37 UTC | 07 May 24 19:37 UTC |
	|         | busybox-fc5497c4f-gcqlv --                        |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-600000 -- exec                       | multinode-600000     | minikube5\jenkins | v1.33.0 | 07 May 24 19:37 UTC | 07 May 24 19:37 UTC |
	|         | busybox-fc5497c4f-cpw2r -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-600000 -- exec                       | multinode-600000     | minikube5\jenkins | v1.33.0 | 07 May 24 19:37 UTC | 07 May 24 19:37 UTC |
	|         | busybox-fc5497c4f-gcqlv -- nslookup               |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-600000 -- get pods -o                | multinode-600000     | minikube5\jenkins | v1.33.0 | 07 May 24 19:37 UTC | 07 May 24 19:37 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-600000 -- exec                       | multinode-600000     | minikube5\jenkins | v1.33.0 | 07 May 24 19:37 UTC | 07 May 24 19:37 UTC |
	|         | busybox-fc5497c4f-cpw2r                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-600000 -- exec                       | multinode-600000     | minikube5\jenkins | v1.33.0 | 07 May 24 19:37 UTC |                     |
	|         | busybox-fc5497c4f-cpw2r -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.128.1                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-600000 -- exec                       | multinode-600000     | minikube5\jenkins | v1.33.0 | 07 May 24 19:37 UTC | 07 May 24 19:37 UTC |
	|         | busybox-fc5497c4f-gcqlv                           |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-600000 -- exec                       | multinode-600000     | minikube5\jenkins | v1.33.0 | 07 May 24 19:37 UTC |                     |
	|         | busybox-fc5497c4f-gcqlv -- sh                     |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.19.128.1                         |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/07 19:30:56
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0507 19:30:56.047611    6544 out.go:291] Setting OutFile to fd 1008 ...
	I0507 19:30:56.048232    6544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 19:30:56.048232    6544 out.go:304] Setting ErrFile to fd 756...
	I0507 19:30:56.048232    6544 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 19:30:56.067440    6544 out.go:298] Setting JSON to false
	I0507 19:30:56.070274    6544 start.go:129] hostinfo: {"hostname":"minikube5","uptime":26173,"bootTime":1715084082,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0507 19:30:56.070334    6544 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 19:30:56.078965    6544 out.go:177] * [multinode-600000] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0507 19:30:56.083316    6544 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 19:30:56.082726    6544 notify.go:220] Checking for updates...
	I0507 19:30:56.085625    6544 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 19:30:56.088275    6544 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0507 19:30:56.090974    6544 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 19:30:56.093008    6544 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 19:30:56.096976    6544 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:30:56.096976    6544 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 19:31:00.820202    6544 out.go:177] * Using the hyperv driver based on user configuration
	I0507 19:31:00.823282    6544 start.go:297] selected driver: hyperv
	I0507 19:31:00.823282    6544 start.go:901] validating driver "hyperv" against <nil>
	I0507 19:31:00.823814    6544 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 19:31:00.864470    6544 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 19:31:00.865754    6544 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 19:31:00.865754    6544 cni.go:84] Creating CNI manager for ""
	I0507 19:31:00.865754    6544 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0507 19:31:00.865754    6544 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0507 19:31:00.865754    6544 start.go:340] cluster config:
	{Name:multinode-600000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 19:31:00.865754    6544 iso.go:125] acquiring lock: {Name:mk4977609d05da04fcecf95837b3381fb1950afd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 19:31:00.871265    6544 out.go:177] * Starting "multinode-600000" primary control-plane node in "multinode-600000" cluster
	I0507 19:31:00.873094    6544 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 19:31:00.874071    6544 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0507 19:31:00.874071    6544 cache.go:56] Caching tarball of preloaded images
	I0507 19:31:00.874071    6544 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0507 19:31:00.874071    6544 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 19:31:00.874071    6544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\config.json ...
	I0507 19:31:00.874071    6544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\config.json: {Name:mk8d038dd79475f1d720f120ae1c51ef98bd6b70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:31:00.875688    6544 start.go:360] acquireMachinesLock for multinode-600000: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 19:31:00.875875    6544 start.go:364] duration metric: took 187µs to acquireMachinesLock for "multinode-600000"
	I0507 19:31:00.875875    6544 start.go:93] Provisioning new machine with config: &{Name:multinode-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 19:31:00.875875    6544 start.go:125] createHost starting for "" (driver="hyperv")
	I0507 19:31:00.879084    6544 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 19:31:00.879084    6544 start.go:159] libmachine.API.Create for "multinode-600000" (driver="hyperv")
	I0507 19:31:00.879084    6544 client.go:168] LocalClient.Create starting
	I0507 19:31:00.880091    6544 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0507 19:31:00.880091    6544 main.go:141] libmachine: Decoding PEM data...
	I0507 19:31:00.880091    6544 main.go:141] libmachine: Parsing certificate...
	I0507 19:31:00.880091    6544 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0507 19:31:00.880091    6544 main.go:141] libmachine: Decoding PEM data...
	I0507 19:31:00.880091    6544 main.go:141] libmachine: Parsing certificate...
	I0507 19:31:00.880091    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0507 19:31:02.794948    6544 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0507 19:31:02.795016    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:02.795016    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0507 19:31:04.374844    6544 main.go:141] libmachine: [stdout =====>] : False
	
	I0507 19:31:04.374844    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:04.374992    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 19:31:05.738062    6544 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 19:31:05.738062    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:05.738244    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 19:31:09.077673    6544 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 19:31:09.077673    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:09.079697    6544 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0507 19:31:09.408704    6544 main.go:141] libmachine: Creating SSH key...
	I0507 19:31:09.648514    6544 main.go:141] libmachine: Creating VM...
	I0507 19:31:09.648514    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 19:31:12.273144    6544 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 19:31:12.273339    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:12.273400    6544 main.go:141] libmachine: Using switch "Default Switch"
	I0507 19:31:12.273400    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 19:31:13.847478    6544 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 19:31:13.847478    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:13.847478    6544 main.go:141] libmachine: Creating VHD
	I0507 19:31:13.847478    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\fixed.vhd' -SizeBytes 10MB -Fixed
	I0507 19:31:17.316896    6544 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 29E17AF4-5C49-49DF-AF52-C73B3C5CD438
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0507 19:31:17.317235    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:17.317235    6544 main.go:141] libmachine: Writing magic tar header
	I0507 19:31:17.317235    6544 main.go:141] libmachine: Writing SSH key tar header
	I0507 19:31:17.326415    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\disk.vhd' -VHDType Dynamic -DeleteSource
	I0507 19:31:20.294426    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:31:20.294505    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:20.294693    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\disk.vhd' -SizeBytes 20000MB
	I0507 19:31:22.653444    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:31:22.653979    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:22.653979    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-600000 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0507 19:31:25.975713    6544 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-600000 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0507 19:31:25.975713    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:25.976402    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-600000 -DynamicMemoryEnabled $false
	I0507 19:31:28.045098    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:31:28.045358    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:28.045358    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-600000 -Count 2
	I0507 19:31:29.964439    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:31:29.964526    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:29.964526    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-600000 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\boot2docker.iso'
	I0507 19:31:32.217855    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:31:32.218804    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:32.218804    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-600000 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\disk.vhd'
	I0507 19:31:34.547637    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:31:34.547637    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:34.547637    6544 main.go:141] libmachine: Starting VM...
	I0507 19:31:34.547637    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-600000
	I0507 19:31:37.289456    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:31:37.290390    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:37.290441    6544 main.go:141] libmachine: Waiting for host to start...
	I0507 19:31:37.290441    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:31:39.287316    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:31:39.287316    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:39.287738    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:31:41.516973    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:31:41.516973    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:42.526609    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:31:44.483763    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:31:44.483825    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:44.483993    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:31:46.733267    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:31:46.733722    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:47.746305    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:31:49.687406    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:31:49.687406    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:49.688300    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:31:51.923308    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:31:51.923308    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:52.931726    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:31:54.913993    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:31:54.913993    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:54.914830    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:31:57.201339    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:31:57.201339    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:31:58.210589    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:32:00.200591    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:32:00.200591    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:00.201445    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:32:02.587390    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:32:02.587390    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:02.588486    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:32:04.480293    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:32:04.480493    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:04.480493    6544 machine.go:94] provisionDockerMachine start ...
	I0507 19:32:04.480661    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:32:06.407594    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:32:06.407594    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:06.408276    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:32:08.627291    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:32:08.627291    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:08.632373    6544 main.go:141] libmachine: Using SSH client type: native
	I0507 19:32:08.641831    6544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.74 22 <nil> <nil>}
	I0507 19:32:08.641831    6544 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 19:32:08.791717    6544 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0507 19:32:08.791717    6544 buildroot.go:166] provisioning hostname "multinode-600000"
	I0507 19:32:08.791813    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:32:10.697939    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:32:10.697939    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:10.698011    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:32:12.981594    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:32:12.981594    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:12.985658    6544 main.go:141] libmachine: Using SSH client type: native
	I0507 19:32:12.985754    6544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.74 22 <nil> <nil>}
	I0507 19:32:12.985754    6544 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-600000 && echo "multinode-600000" | sudo tee /etc/hostname
	I0507 19:32:13.155075    6544 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-600000
	
	I0507 19:32:13.155150    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:32:15.042815    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:32:15.042815    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:15.043332    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:32:17.295160    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:32:17.295160    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:17.299027    6544 main.go:141] libmachine: Using SSH client type: native
	I0507 19:32:17.299641    6544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.74 22 <nil> <nil>}
	I0507 19:32:17.299719    6544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-600000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-600000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-600000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 19:32:17.450130    6544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 19:32:17.450130    6544 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0507 19:32:17.450219    6544 buildroot.go:174] setting up certificates
	I0507 19:32:17.450262    6544 provision.go:84] configureAuth start
	I0507 19:32:17.450262    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:32:19.320915    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:32:19.320984    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:19.321090    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:32:21.568904    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:32:21.568904    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:21.568904    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:32:23.423921    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:32:23.423921    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:23.423921    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:32:25.683608    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:32:25.684601    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:25.684601    6544 provision.go:143] copyHostCerts
	I0507 19:32:25.684699    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0507 19:32:25.684699    6544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0507 19:32:25.684699    6544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0507 19:32:25.685418    6544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0507 19:32:25.686198    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0507 19:32:25.686302    6544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0507 19:32:25.686302    6544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0507 19:32:25.686302    6544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0507 19:32:25.687334    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0507 19:32:25.687334    6544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0507 19:32:25.687334    6544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0507 19:32:25.687945    6544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0507 19:32:25.688767    6544 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-600000 san=[127.0.0.1 172.19.143.74 localhost minikube multinode-600000]
	I0507 19:32:25.782503    6544 provision.go:177] copyRemoteCerts
	I0507 19:32:25.790494    6544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 19:32:25.790494    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:32:27.665013    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:32:27.665013    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:27.665149    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:32:29.919161    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:32:29.920202    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:29.920425    6544 sshutil.go:53] new ssh client: &{IP:172.19.143.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\id_rsa Username:docker}
	I0507 19:32:30.024699    6544 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2339188s)
	I0507 19:32:30.024699    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0507 19:32:30.025340    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0507 19:32:30.066822    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0507 19:32:30.066958    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0507 19:32:30.107674    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0507 19:32:30.108606    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0507 19:32:30.151214    6544 provision.go:87] duration metric: took 12.700096s to configureAuth
	I0507 19:32:30.151214    6544 buildroot.go:189] setting minikube options for container-runtime
	I0507 19:32:30.151844    6544 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:32:30.151844    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:32:32.037940    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:32:32.037940    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:32.038878    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:32:34.272814    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:32:34.272814    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:34.278861    6544 main.go:141] libmachine: Using SSH client type: native
	I0507 19:32:34.279411    6544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.74 22 <nil> <nil>}
	I0507 19:32:34.279411    6544 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 19:32:34.418328    6544 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 19:32:34.418378    6544 buildroot.go:70] root file system type: tmpfs
	I0507 19:32:34.418596    6544 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 19:32:34.418653    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:32:36.275791    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:32:36.275791    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:36.275791    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:32:38.468068    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:32:38.468068    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:38.472104    6544 main.go:141] libmachine: Using SSH client type: native
	I0507 19:32:38.472512    6544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.74 22 <nil> <nil>}
	I0507 19:32:38.472614    6544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 19:32:38.641482    6544 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 19:32:38.641675    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:32:40.506510    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:32:40.506510    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:40.507541    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:32:42.742911    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:32:42.742911    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:42.749593    6544 main.go:141] libmachine: Using SSH client type: native
	I0507 19:32:42.750127    6544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.74 22 <nil> <nil>}
	I0507 19:32:42.750229    6544 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 19:32:44.808909    6544 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
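The `diff … || { mv …; systemctl … }` command above installs the freshly written `docker.service.new` only when it differs from the current unit (here the unit did not exist yet, hence the `can't stat` message followed by the symlink creation). A minimal sketch of that compare-and-swap idiom on placeholder paths (the `systemctl` step is only echoed, not run):

```shell
#!/bin/sh
# Compare-and-swap install of a config file, as in the log's
# "sudo diff -u ... || { sudo mv ...; sudo systemctl ...; }" line.
# Placeholder paths; the real target is /lib/systemd/system/docker.service.
old=/tmp/demo.service
new=/tmp/demo.service.new
printf '[Service]\nExecStart=\nExecStart=/usr/bin/true\n' > "$new"
# diff exits 0 when the files match, so the replace-and-reload branch
# runs only when the unit is missing or has changed.
diff -u "$old" "$new" 2>/dev/null || {
    mv "$new" "$old"
    echo "would run: systemctl daemon-reload && systemctl restart docker"
}
```

On a second run with unchanged content, `diff` succeeds and nothing is replaced, which keeps the provisioning step idempotent.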
	
	I0507 19:32:44.809014    6544 machine.go:97] duration metric: took 40.3257239s to provisionDockerMachine
	I0507 19:32:44.809014    6544 client.go:171] duration metric: took 1m43.9229s to LocalClient.Create
	I0507 19:32:44.809133    6544 start.go:167] duration metric: took 1m43.923019s to libmachine.API.Create "multinode-600000"
	I0507 19:32:44.809133    6544 start.go:293] postStartSetup for "multinode-600000" (driver="hyperv")
	I0507 19:32:44.809242    6544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 19:32:44.819103    6544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 19:32:44.819103    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:32:46.715437    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:32:46.715437    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:46.715679    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:32:48.957618    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:32:48.957618    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:48.958388    6544 sshutil.go:53] new ssh client: &{IP:172.19.143.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\id_rsa Username:docker}
	I0507 19:32:49.064775    6544 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2453853s)
	I0507 19:32:49.072117    6544 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 19:32:49.079148    6544 command_runner.go:130] > NAME=Buildroot
	I0507 19:32:49.079148    6544 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0507 19:32:49.079148    6544 command_runner.go:130] > ID=buildroot
	I0507 19:32:49.079148    6544 command_runner.go:130] > VERSION_ID=2023.02.9
	I0507 19:32:49.079148    6544 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0507 19:32:49.079148    6544 info.go:137] Remote host: Buildroot 2023.02.9
	I0507 19:32:49.079148    6544 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0507 19:32:49.079148    6544 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0507 19:32:49.080192    6544 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> 99922.pem in /etc/ssl/certs
	I0507 19:32:49.080192    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /etc/ssl/certs/99922.pem
	I0507 19:32:49.088678    6544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0507 19:32:49.105039    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /etc/ssl/certs/99922.pem (1708 bytes)
	I0507 19:32:49.147673    6544 start.go:296] duration metric: took 4.3382483s for postStartSetup
	I0507 19:32:49.150903    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:32:51.032915    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:32:51.032915    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:51.033888    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:32:53.286266    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:32:53.286266    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:53.286552    6544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\config.json ...
	I0507 19:32:53.290125    6544 start.go:128] duration metric: took 1m52.4066495s to createHost
	I0507 19:32:53.290125    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:32:55.183855    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:32:55.184673    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:55.184673    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:32:57.439412    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:32:57.439486    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:57.444399    6544 main.go:141] libmachine: Using SSH client type: native
	I0507 19:32:57.444481    6544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.74 22 <nil> <nil>}
	I0507 19:32:57.444481    6544 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0507 19:32:57.581284    6544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715110377.818371653
	
	I0507 19:32:57.581284    6544 fix.go:216] guest clock: 1715110377.818371653
	I0507 19:32:57.581284    6544 fix.go:229] Guest: 2024-05-07 19:32:57.818371653 +0000 UTC Remote: 2024-05-07 19:32:53.2901256 +0000 UTC m=+117.363162701 (delta=4.528246053s)
	I0507 19:32:57.581284    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:32:59.445577    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:32:59.445577    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:32:59.445657    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:33:01.731377    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:33:01.732054    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:33:01.737045    6544 main.go:141] libmachine: Using SSH client type: native
	I0507 19:33:01.737644    6544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.74 22 <nil> <nil>}
	I0507 19:33:01.737644    6544 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715110377
	I0507 19:33:01.893255    6544 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May  7 19:32:57 UTC 2024
	
	I0507 19:33:01.893255    6544 fix.go:236] clock set: Tue May  7 19:32:57 UTC 2024
	 (err=<nil>)
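The mangled `date +%!s(MISSING).%!N(MISSING)` line earlier in this sequence is Go's `fmt` rendering of the literal command `date +%s.%N` (the `%s`/`%N` format verbs look like missing arguments to the logger). The clock fix then compares guest and host epochs and applies `sudo date -s @<epoch>`, as shown above. A rough local sketch of the delta computation (both timestamps are read locally here, and the echo is illustrative; minikube reads one side over SSH):

```shell
#!/bin/sh
# Guest-clock delta check, mirroring fix.go's "guest clock" lines.
guest=$(date +%s.%N)   # the un-garbled form of "date +%!s(MISSING).%!N(MISSING)"
host=$(date +%s.%N)
# Compare whole seconds; minikube resets the guest when the drift matters,
# via: sudo date -s @<host-epoch-seconds>  (not executed in this sketch).
delta=$(( ${host%.*} - ${guest%.*} ))
echo "delta=${delta}s"
```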
	I0507 19:33:01.893255    6544 start.go:83] releasing machines lock for "multinode-600000", held for 2m1.0092024s
	I0507 19:33:01.893788    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:33:03.740668    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:33:03.740668    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:33:03.740865    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:33:05.964616    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:33:05.965256    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:33:05.968017    6544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 19:33:05.968017    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:33:05.975638    6544 ssh_runner.go:195] Run: cat /version.json
	I0507 19:33:05.975638    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:33:07.960431    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:33:07.961046    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:33:07.961046    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:33:07.976081    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:33:07.976081    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:33:07.976919    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:33:10.299219    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:33:10.299219    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:33:10.299219    6544 sshutil.go:53] new ssh client: &{IP:172.19.143.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\id_rsa Username:docker}
	I0507 19:33:10.326962    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:33:10.326962    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:33:10.326962    6544 sshutil.go:53] new ssh client: &{IP:172.19.143.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\id_rsa Username:docker}
	I0507 19:33:10.401024    6544 command_runner.go:130] > {"iso_version": "v1.33.0-1714498396-18779", "kicbase_version": "v0.0.43-1714386659-18769", "minikube_version": "v1.33.0", "commit": "0c7995ab2d4914d5c74027eee5f5d102e19316f2"}
	I0507 19:33:10.401396    6544 ssh_runner.go:235] Completed: cat /version.json: (4.4250895s)
	I0507 19:33:10.409173    6544 ssh_runner.go:195] Run: systemctl --version
	I0507 19:33:10.474320    6544 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0507 19:33:10.474438    6544 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.506119s)
	I0507 19:33:10.474438    6544 command_runner.go:130] > systemd 252 (252)
	I0507 19:33:10.474559    6544 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0507 19:33:10.486113    6544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0507 19:33:10.492936    6544 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0507 19:33:10.493815    6544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 19:33:10.501045    6544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0507 19:33:10.527293    6544 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0507 19:33:10.527293    6544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0507 19:33:10.527419    6544 start.go:494] detecting cgroup driver to use...
	I0507 19:33:10.527720    6544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 19:33:10.558361    6544 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0507 19:33:10.568452    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0507 19:33:10.594118    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 19:33:10.613506    6544 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 19:33:10.622704    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 19:33:10.649727    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 19:33:10.676909    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 19:33:10.705557    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 19:33:10.740441    6544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 19:33:10.777685    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 19:33:10.807398    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 19:33:10.834267    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0507 19:33:10.862050    6544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 19:33:10.878133    6544 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0507 19:33:10.890383    6544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 19:33:10.916020    6544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:33:11.103492    6544 ssh_runner.go:195] Run: sudo systemctl restart containerd
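The run of `sed -i -r` commands above rewrites `/etc/containerd/config.toml` in place to pin the cgroupfs driver and the `runc.v2` runtime before restarting containerd. A minimal sketch of the key `SystemdCgroup` substitution against a scratch copy (placeholder path; this assumes GNU sed, as on the minikube guest):

```shell
#!/bin/sh
# Force SystemdCgroup = false in a containerd-style TOML snippet,
# using the same sed expression as the log (scratch file, not /etc).
cfg=/tmp/containerd-demo.toml
printf '%s\n' \
  '[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]' \
  '  SystemdCgroup = true' > "$cfg"
# \1 preserves the original indentation captured by ( *).
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
cat "$cfg"
```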
	I0507 19:33:11.133508    6544 start.go:494] detecting cgroup driver to use...
	I0507 19:33:11.146799    6544 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 19:33:11.166208    6544 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0507 19:33:11.166208    6544 command_runner.go:130] > [Unit]
	I0507 19:33:11.166280    6544 command_runner.go:130] > Description=Docker Application Container Engine
	I0507 19:33:11.166280    6544 command_runner.go:130] > Documentation=https://docs.docker.com
	I0507 19:33:11.166280    6544 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0507 19:33:11.166280    6544 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0507 19:33:11.166280    6544 command_runner.go:130] > StartLimitBurst=3
	I0507 19:33:11.166280    6544 command_runner.go:130] > StartLimitIntervalSec=60
	I0507 19:33:11.166280    6544 command_runner.go:130] > [Service]
	I0507 19:33:11.166280    6544 command_runner.go:130] > Type=notify
	I0507 19:33:11.166280    6544 command_runner.go:130] > Restart=on-failure
	I0507 19:33:11.166280    6544 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0507 19:33:11.166280    6544 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0507 19:33:11.166280    6544 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0507 19:33:11.166280    6544 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0507 19:33:11.166280    6544 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0507 19:33:11.166280    6544 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0507 19:33:11.166280    6544 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0507 19:33:11.166280    6544 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0507 19:33:11.166280    6544 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0507 19:33:11.166280    6544 command_runner.go:130] > ExecStart=
	I0507 19:33:11.166280    6544 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0507 19:33:11.166280    6544 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0507 19:33:11.166280    6544 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0507 19:33:11.166280    6544 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0507 19:33:11.166280    6544 command_runner.go:130] > LimitNOFILE=infinity
	I0507 19:33:11.166280    6544 command_runner.go:130] > LimitNPROC=infinity
	I0507 19:33:11.166280    6544 command_runner.go:130] > LimitCORE=infinity
	I0507 19:33:11.166280    6544 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0507 19:33:11.166280    6544 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0507 19:33:11.166280    6544 command_runner.go:130] > TasksMax=infinity
	I0507 19:33:11.166280    6544 command_runner.go:130] > TimeoutStartSec=0
	I0507 19:33:11.166280    6544 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0507 19:33:11.166280    6544 command_runner.go:130] > Delegate=yes
	I0507 19:33:11.166280    6544 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0507 19:33:11.166280    6544 command_runner.go:130] > KillMode=process
	I0507 19:33:11.166280    6544 command_runner.go:130] > [Install]
	I0507 19:33:11.166280    6544 command_runner.go:130] > WantedBy=multi-user.target
	I0507 19:33:11.175419    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 19:33:11.206577    6544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 19:33:11.237273    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 19:33:11.264488    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 19:33:11.296995    6544 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0507 19:33:11.348959    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 19:33:11.369840    6544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 19:33:11.400495    6544 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0507 19:33:11.409484    6544 ssh_runner.go:195] Run: which cri-dockerd
	I0507 19:33:11.414491    6544 command_runner.go:130] > /usr/bin/cri-dockerd
	I0507 19:33:11.422478    6544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 19:33:11.438488    6544 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 19:33:11.480086    6544 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 19:33:11.639589    6544 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 19:33:11.801903    6544 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 19:33:11.802189    6544 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0507 19:33:11.841345    6544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:33:12.008682    6544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 19:33:14.479450    6544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.470603s)
	I0507 19:33:14.491882    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 19:33:14.518720    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 19:33:14.547366    6544 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 19:33:14.727961    6544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 19:33:14.910139    6544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:33:15.088093    6544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 19:33:15.127577    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 19:33:15.158925    6544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:33:15.324965    6544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 19:33:15.420758    6544 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 19:33:15.432288    6544 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 19:33:15.440019    6544 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0507 19:33:15.440019    6544 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0507 19:33:15.440019    6544 command_runner.go:130] > Device: 0,22	Inode: 882         Links: 1
	I0507 19:33:15.440019    6544 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0507 19:33:15.440019    6544 command_runner.go:130] > Access: 2024-05-07 19:33:15.587356749 +0000
	I0507 19:33:15.440019    6544 command_runner.go:130] > Modify: 2024-05-07 19:33:15.587356749 +0000
	I0507 19:33:15.440019    6544 command_runner.go:130] > Change: 2024-05-07 19:33:15.590356942 +0000
	I0507 19:33:15.440019    6544 command_runner.go:130] >  Birth: -
	I0507 19:33:15.440019    6544 start.go:562] Will wait 60s for crictl version
	I0507 19:33:15.448907    6544 ssh_runner.go:195] Run: which crictl
	I0507 19:33:15.454558    6544 command_runner.go:130] > /usr/bin/crictl
	I0507 19:33:15.462393    6544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 19:33:15.508990    6544 command_runner.go:130] > Version:  0.1.0
	I0507 19:33:15.508990    6544 command_runner.go:130] > RuntimeName:  docker
	I0507 19:33:15.509459    6544 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0507 19:33:15.509459    6544 command_runner.go:130] > RuntimeApiVersion:  v1
	I0507 19:33:15.509459    6544 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0507 19:33:15.516343    6544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 19:33:15.542168    6544 command_runner.go:130] > 26.0.2
	I0507 19:33:15.549015    6544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 19:33:15.574498    6544 command_runner.go:130] > 26.0.2
	I0507 19:33:15.579771    6544 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0507 19:33:15.580298    6544 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0507 19:33:15.584998    6544 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0507 19:33:15.584998    6544 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0507 19:33:15.584998    6544 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0507 19:33:15.584998    6544 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a3:a5:4f Flags:up|broadcast|multicast|running}
	I0507 19:33:15.587310    6544 ip.go:210] interface addr: fe80::1edb:f5fd:c218:d8d2/64
	I0507 19:33:15.587310    6544 ip.go:210] interface addr: 172.19.128.1/20
	I0507 19:33:15.594351    6544 ssh_runner.go:195] Run: grep 172.19.128.1	host.minikube.internal$ /etc/hosts
	I0507 19:33:15.599997    6544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
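The bash one-liner above makes the `host.minikube.internal` entry idempotent: filter out any existing line, append a fresh one, then copy the temp file over `/etc/hosts`. A sketch of the same filter-append-replace pattern on a scratch file (placeholder path; IP taken from the log):

```shell
#!/bin/sh
# Idempotent hosts-entry update, mirroring the log's one-liner
# (scratch file instead of /etc/hosts).
hosts=/tmp/hosts-demo
printf '127.0.0.1\tlocalhost\n172.19.128.1\thost.minikube.internal\n' > "$hosts"
# Drop any stale entry, append the current one, then replace the file.
{ grep -v 'host\.minikube\.internal$' "$hosts"
  printf '172.19.128.1\thost.minikube.internal\n'; } > "$hosts.$$"
cp "$hosts.$$" "$hosts" && rm -f "$hosts.$$"
```

Running this repeatedly leaves exactly one `host.minikube.internal` line, which is why minikube can apply it unconditionally on every start.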
	I0507 19:33:15.618655    6544 kubeadm.go:877] updating cluster {Name:multinode-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.143.74 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOption
s:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0507 19:33:15.618856    6544 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 19:33:15.626506    6544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 19:33:15.643748    6544 docker.go:685] Got preloaded images: 
	I0507 19:33:15.643748    6544 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0507 19:33:15.654161    6544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0507 19:33:15.670137    6544 command_runner.go:139] > {"Repositories":{}}
	I0507 19:33:15.679683    6544 ssh_runner.go:195] Run: which lz4
	I0507 19:33:15.683907    6544 command_runner.go:130] > /usr/bin/lz4
	I0507 19:33:15.684718    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0507 19:33:15.697266    6544 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0507 19:33:15.703510    6544 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0507 19:33:15.703698    6544 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0507 19:33:15.703903    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0507 19:33:17.217105    6544 docker.go:649] duration metric: took 1.5319421s to copy over tarball
	I0507 19:33:17.225178    6544 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0507 19:33:26.473588    6544 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (9.2477912s)
	I0507 19:33:26.473588    6544 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0507 19:33:26.533177    6544 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0507 19:33:26.550754    6544 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1":"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.12-0":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b":"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.30.0":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","registry.k8s.io/kube-apiserver@sha256:6b8e197b2d39c321189a475ac755a77896e34b56729425590fbc99f3a96468a3":"sha256:c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.30.0":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","registry.k8s.io/kube-controller-manager@sha256:5f52f00f17d5784b5ca004dffca59710fa1a9eec8d54cebdf9433a1d134150fe":"sha256:c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.30.0":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","registry.k8s.io/kube-proxy@sha256:ec532ff47eaf39822387e51ec73f1f2502eb74658c6303319db88d2c380d0210":"sha256:a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.30.0":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","registry.k8s.io/kube-scheduler@sha256:2353c3a1803229970fcb571cffc9b2f120372350e01c7381b4b650c4a02b9d67":"sha256:259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.9":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097":"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c"}}}
	I0507 19:33:26.552065    6544 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0507 19:33:26.596936    6544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:33:26.788511    6544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 19:33:30.096058    6544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.3067764s)
	I0507 19:33:30.102914    6544 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 19:33:30.123368    6544 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0507 19:33:30.123368    6544 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0507 19:33:30.123368    6544 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0507 19:33:30.123368    6544 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0507 19:33:30.123368    6544 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0507 19:33:30.123368    6544 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0507 19:33:30.123368    6544 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0507 19:33:30.123368    6544 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 19:33:30.123368    6544 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0507 19:33:30.123368    6544 cache_images.go:84] Images are preloaded, skipping loading
	I0507 19:33:30.123368    6544 kubeadm.go:928] updating node { 172.19.143.74 8443 v1.30.0 docker true true} ...
	I0507 19:33:30.123368    6544 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-600000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.143.74
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0507 19:33:30.130968    6544 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0507 19:33:30.160528    6544 command_runner.go:130] > cgroupfs
	I0507 19:33:30.161746    6544 cni.go:84] Creating CNI manager for ""
	I0507 19:33:30.161746    6544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0507 19:33:30.161746    6544 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0507 19:33:30.161746    6544 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.143.74 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-600000 NodeName:multinode-600000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.143.74"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.143.74 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0507 19:33:30.162020    6544 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.143.74
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-600000"
	  kubeletExtraArgs:
	    node-ip: 172.19.143.74
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.143.74"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0507 19:33:30.170044    6544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0507 19:33:30.185911    6544 command_runner.go:130] > kubeadm
	I0507 19:33:30.185911    6544 command_runner.go:130] > kubectl
	I0507 19:33:30.185911    6544 command_runner.go:130] > kubelet
	I0507 19:33:30.186798    6544 binaries.go:44] Found k8s binaries, skipping transfer
	I0507 19:33:30.196342    6544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0507 19:33:30.212207    6544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0507 19:33:30.239459    6544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 19:33:30.271618    6544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0507 19:33:30.309716    6544 ssh_runner.go:195] Run: grep 172.19.143.74	control-plane.minikube.internal$ /etc/hosts
	I0507 19:33:30.316238    6544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.143.74	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 19:33:30.343175    6544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:33:30.504869    6544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 19:33:30.530437    6544 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000 for IP: 172.19.143.74
	I0507 19:33:30.530437    6544 certs.go:194] generating shared ca certs ...
	I0507 19:33:30.530437    6544 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:33:30.531379    6544 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0507 19:33:30.531717    6544 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0507 19:33:30.531896    6544 certs.go:256] generating profile certs ...
	I0507 19:33:30.532767    6544 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\client.key
	I0507 19:33:30.533004    6544 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\client.crt with IP's: []
	I0507 19:33:30.894837    6544 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\client.crt ...
	I0507 19:33:30.894837    6544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\client.crt: {Name:mkf8b3da70a21371b358d5fcc4d4d71f7f74ecfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:33:30.896529    6544 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\client.key ...
	I0507 19:33:30.896529    6544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\client.key: {Name:mkaff0f4f286dcbc9c323683986fb845eda3a1de Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:33:30.898530    6544 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.key.bf102f0c
	I0507 19:33:30.898778    6544 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.crt.bf102f0c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.143.74]
	I0507 19:33:31.024082    6544 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.crt.bf102f0c ...
	I0507 19:33:31.024082    6544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.crt.bf102f0c: {Name:mk5438d52d80344f1da0f343c6cbe9677dbd95a2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:33:31.025687    6544 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.key.bf102f0c ...
	I0507 19:33:31.025687    6544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.key.bf102f0c: {Name:mkede54eb6bf763ae7af225eb9f10410aaa5c449 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:33:31.026952    6544 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.crt.bf102f0c -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.crt
	I0507 19:33:31.036274    6544 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.key.bf102f0c -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.key
	I0507 19:33:31.040592    6544 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\proxy-client.key
	I0507 19:33:31.041514    6544 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\proxy-client.crt with IP's: []
	I0507 19:33:31.257488    6544 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\proxy-client.crt ...
	I0507 19:33:31.257488    6544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\proxy-client.crt: {Name:mkc7b028efc68c4f7a0d06e2fa25e72f6aa5c5c3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:33:31.259514    6544 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\proxy-client.key ...
	I0507 19:33:31.259514    6544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\proxy-client.key: {Name:mkd3b935406f13359eddfa9ea13f6bfeb267ebbd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:33:31.259846    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0507 19:33:31.260624    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0507 19:33:31.260779    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0507 19:33:31.260915    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0507 19:33:31.261048    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0507 19:33:31.261092    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0507 19:33:31.261278    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0507 19:33:31.271463    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0507 19:33:31.272262    6544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem (1338 bytes)
	W0507 19:33:31.272557    6544 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992_empty.pem, impossibly tiny 0 bytes
	I0507 19:33:31.272557    6544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0507 19:33:31.272828    6544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0507 19:33:31.273001    6544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0507 19:33:31.273001    6544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0507 19:33:31.273001    6544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem (1708 bytes)
	I0507 19:33:31.273001    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /usr/share/ca-certificates/99922.pem
	I0507 19:33:31.273001    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:33:31.273001    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem -> /usr/share/ca-certificates/9992.pem
	I0507 19:33:31.274202    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 19:33:31.319785    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 19:33:31.359796    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 19:33:31.400302    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0507 19:33:31.443904    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0507 19:33:31.486121    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0507 19:33:31.527445    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0507 19:33:31.568458    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0507 19:33:31.610122    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /usr/share/ca-certificates/99922.pem (1708 bytes)
	I0507 19:33:31.654489    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 19:33:31.695779    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem --> /usr/share/ca-certificates/9992.pem (1338 bytes)
	I0507 19:33:31.738783    6544 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0507 19:33:31.780297    6544 ssh_runner.go:195] Run: openssl version
	I0507 19:33:31.788339    6544 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0507 19:33:31.796764    6544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99922.pem && ln -fs /usr/share/ca-certificates/99922.pem /etc/ssl/certs/99922.pem"
	I0507 19:33:31.823339    6544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99922.pem
	I0507 19:33:31.830029    6544 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 19:33:31.830116    6544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 19:33:31.837855    6544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99922.pem
	I0507 19:33:31.844393    6544 command_runner.go:130] > 3ec20f2e
	I0507 19:33:31.852942    6544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99922.pem /etc/ssl/certs/3ec20f2e.0"
	I0507 19:33:31.880478    6544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 19:33:31.908064    6544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:33:31.914677    6544 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:33:31.914677    6544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:33:31.922250    6544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:33:31.930474    6544 command_runner.go:130] > b5213941
	I0507 19:33:31.938180    6544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0507 19:33:31.965029    6544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9992.pem && ln -fs /usr/share/ca-certificates/9992.pem /etc/ssl/certs/9992.pem"
	I0507 19:33:31.990975    6544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9992.pem
	I0507 19:33:31.998080    6544 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 19:33:31.998156    6544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 19:33:32.006332    6544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9992.pem
	I0507 19:33:32.014029    6544 command_runner.go:130] > 51391683
	I0507 19:33:32.022884    6544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9992.pem /etc/ssl/certs/51391683.0"
	I0507 19:33:32.053826    6544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 19:33:32.060536    6544 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0507 19:33:32.061227    6544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0507 19:33:32.061412    6544 kubeadm.go:391] StartCluster: {Name:multinode-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.143.74 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 19:33:32.068027    6544 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0507 19:33:32.107138    6544 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0507 19:33:32.121746    6544 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0507 19:33:32.121746    6544 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0507 19:33:32.121746    6544 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0507 19:33:32.135016    6544 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0507 19:33:32.164873    6544 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0507 19:33:32.180913    6544 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0507 19:33:32.180913    6544 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0507 19:33:32.180913    6544 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0507 19:33:32.180913    6544 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0507 19:33:32.181585    6544 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0507 19:33:32.181585    6544 kubeadm.go:156] found existing configuration files:
	
	I0507 19:33:32.189608    6544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0507 19:33:32.205329    6544 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0507 19:33:32.205413    6544 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0507 19:33:32.213842    6544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0507 19:33:32.237277    6544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0507 19:33:32.252951    6544 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0507 19:33:32.253790    6544 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0507 19:33:32.262014    6544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0507 19:33:32.285244    6544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0507 19:33:32.299576    6544 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0507 19:33:32.300579    6544 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0507 19:33:32.308350    6544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0507 19:33:32.331575    6544 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0507 19:33:32.347099    6544 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0507 19:33:32.348111    6544 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0507 19:33:32.357161    6544 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0507 19:33:32.373451    6544 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0507 19:33:32.723743    6544 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0507 19:33:32.723743    6544 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0507 19:33:44.550434    6544 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0507 19:33:44.550538    6544 command_runner.go:130] > [init] Using Kubernetes version: v1.30.0
	I0507 19:33:44.550712    6544 kubeadm.go:309] [preflight] Running pre-flight checks
	I0507 19:33:44.550712    6544 command_runner.go:130] > [preflight] Running pre-flight checks
	I0507 19:33:44.551025    6544 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0507 19:33:44.551097    6544 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0507 19:33:44.551396    6544 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0507 19:33:44.551396    6544 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0507 19:33:44.551684    6544 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0507 19:33:44.551684    6544 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0507 19:33:44.551684    6544 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0507 19:33:44.551684    6544 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0507 19:33:44.557719    6544 out.go:204]   - Generating certificates and keys ...
	I0507 19:33:44.557719    6544 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0507 19:33:44.557719    6544 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0507 19:33:44.557719    6544 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0507 19:33:44.557719    6544 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0507 19:33:44.558252    6544 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0507 19:33:44.558358    6544 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0507 19:33:44.558458    6544 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0507 19:33:44.558458    6544 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0507 19:33:44.558458    6544 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0507 19:33:44.558458    6544 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0507 19:33:44.558458    6544 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0507 19:33:44.558458    6544 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0507 19:33:44.558458    6544 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0507 19:33:44.558458    6544 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0507 19:33:44.559170    6544 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-600000] and IPs [172.19.143.74 127.0.0.1 ::1]
	I0507 19:33:44.559170    6544 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-600000] and IPs [172.19.143.74 127.0.0.1 ::1]
	I0507 19:33:44.559170    6544 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0507 19:33:44.559170    6544 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0507 19:33:44.559170    6544 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-600000] and IPs [172.19.143.74 127.0.0.1 ::1]
	I0507 19:33:44.559170    6544 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-600000] and IPs [172.19.143.74 127.0.0.1 ::1]
	I0507 19:33:44.559170    6544 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0507 19:33:44.559170    6544 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0507 19:33:44.559170    6544 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0507 19:33:44.559170    6544 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0507 19:33:44.559170    6544 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0507 19:33:44.559170    6544 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0507 19:33:44.559170    6544 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0507 19:33:44.559170    6544 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0507 19:33:44.560239    6544 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0507 19:33:44.560239    6544 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0507 19:33:44.560347    6544 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0507 19:33:44.560409    6544 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0507 19:33:44.560549    6544 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0507 19:33:44.560549    6544 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0507 19:33:44.560677    6544 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0507 19:33:44.560677    6544 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0507 19:33:44.560860    6544 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0507 19:33:44.560860    6544 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0507 19:33:44.561050    6544 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0507 19:33:44.561050    6544 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0507 19:33:44.561050    6544 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0507 19:33:44.565779    6544 out.go:204]   - Booting up control plane ...
	I0507 19:33:44.561322    6544 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0507 19:33:44.565779    6544 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0507 19:33:44.565779    6544 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0507 19:33:44.566343    6544 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0507 19:33:44.566343    6544 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0507 19:33:44.566377    6544 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0507 19:33:44.566377    6544 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0507 19:33:44.566377    6544 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0507 19:33:44.566377    6544 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0507 19:33:44.566912    6544 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0507 19:33:44.566912    6544 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0507 19:33:44.567002    6544 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0507 19:33:44.567002    6544 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0507 19:33:44.567334    6544 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0507 19:33:44.567334    6544 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0507 19:33:44.567450    6544 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0507 19:33:44.567513    6544 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0507 19:33:44.567576    6544 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002466335s
	I0507 19:33:44.567638    6544 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.002466335s
	I0507 19:33:44.567809    6544 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0507 19:33:44.567809    6544 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0507 19:33:44.567958    6544 command_runner.go:130] > [api-check] The API server is healthy after 6.007857657s
	I0507 19:33:44.567958    6544 kubeadm.go:309] [api-check] The API server is healthy after 6.007857657s
	I0507 19:33:44.568205    6544 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0507 19:33:44.568205    6544 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0507 19:33:44.568541    6544 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0507 19:33:44.568541    6544 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0507 19:33:44.568663    6544 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0507 19:33:44.568663    6544 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0507 19:33:44.569022    6544 command_runner.go:130] > [mark-control-plane] Marking the node multinode-600000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0507 19:33:44.569093    6544 kubeadm.go:309] [mark-control-plane] Marking the node multinode-600000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0507 19:33:44.569228    6544 kubeadm.go:309] [bootstrap-token] Using token: adbmrk.iuhosvxryyzjy4ec
	I0507 19:33:44.573058    6544 out.go:204]   - Configuring RBAC rules ...
	I0507 19:33:44.569377    6544 command_runner.go:130] > [bootstrap-token] Using token: adbmrk.iuhosvxryyzjy4ec
	I0507 19:33:44.573058    6544 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0507 19:33:44.573058    6544 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0507 19:33:44.573651    6544 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0507 19:33:44.573651    6544 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0507 19:33:44.574129    6544 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0507 19:33:44.574188    6544 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0507 19:33:44.574510    6544 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0507 19:33:44.574562    6544 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0507 19:33:44.574796    6544 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0507 19:33:44.574843    6544 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0507 19:33:44.575065    6544 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0507 19:33:44.575065    6544 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0507 19:33:44.575065    6544 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0507 19:33:44.575065    6544 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0507 19:33:44.575065    6544 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0507 19:33:44.575065    6544 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0507 19:33:44.575065    6544 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0507 19:33:44.575065    6544 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0507 19:33:44.575065    6544 kubeadm.go:309] 
	I0507 19:33:44.575597    6544 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0507 19:33:44.575643    6544 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0507 19:33:44.575691    6544 kubeadm.go:309] 
	I0507 19:33:44.575867    6544 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0507 19:33:44.575897    6544 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0507 19:33:44.575897    6544 kubeadm.go:309] 
	I0507 19:33:44.576012    6544 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0507 19:33:44.576060    6544 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0507 19:33:44.576189    6544 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0507 19:33:44.576189    6544 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0507 19:33:44.576332    6544 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0507 19:33:44.576382    6544 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0507 19:33:44.576382    6544 kubeadm.go:309] 
	I0507 19:33:44.576476    6544 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0507 19:33:44.576525    6544 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0507 19:33:44.576623    6544 kubeadm.go:309] 
	I0507 19:33:44.576667    6544 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0507 19:33:44.576667    6544 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0507 19:33:44.576667    6544 kubeadm.go:309] 
	I0507 19:33:44.576667    6544 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0507 19:33:44.576667    6544 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0507 19:33:44.576667    6544 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0507 19:33:44.576667    6544 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0507 19:33:44.576667    6544 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0507 19:33:44.576667    6544 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0507 19:33:44.576667    6544 kubeadm.go:309] 
	I0507 19:33:44.577256    6544 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0507 19:33:44.577256    6544 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0507 19:33:44.577256    6544 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0507 19:33:44.577256    6544 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0507 19:33:44.577256    6544 kubeadm.go:309] 
	I0507 19:33:44.577256    6544 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token adbmrk.iuhosvxryyzjy4ec \
	I0507 19:33:44.577256    6544 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token adbmrk.iuhosvxryyzjy4ec \
	I0507 19:33:44.577790    6544 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 \
	I0507 19:33:44.577790    6544 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 \
	I0507 19:33:44.577825    6544 kubeadm.go:309] 	--control-plane 
	I0507 19:33:44.577881    6544 command_runner.go:130] > 	--control-plane 
	I0507 19:33:44.577881    6544 kubeadm.go:309] 
	I0507 19:33:44.578033    6544 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0507 19:33:44.578094    6544 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0507 19:33:44.578094    6544 kubeadm.go:309] 
	I0507 19:33:44.578175    6544 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token adbmrk.iuhosvxryyzjy4ec \
	I0507 19:33:44.578175    6544 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token adbmrk.iuhosvxryyzjy4ec \
	I0507 19:33:44.578304    6544 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 
	I0507 19:33:44.578304    6544 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 
	I0507 19:33:44.578304    6544 cni.go:84] Creating CNI manager for ""
	I0507 19:33:44.578304    6544 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0507 19:33:44.581029    6544 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0507 19:33:44.592789    6544 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0507 19:33:44.599994    6544 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0507 19:33:44.599994    6544 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0507 19:33:44.599994    6544 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0507 19:33:44.599994    6544 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0507 19:33:44.599994    6544 command_runner.go:130] > Access: 2024-05-07 19:32:00.694196600 +0000
	I0507 19:33:44.600081    6544 command_runner.go:130] > Modify: 2024-04-30 23:29:30.000000000 +0000
	I0507 19:33:44.600081    6544 command_runner.go:130] > Change: 2024-05-07 19:31:51.835000000 +0000
	I0507 19:33:44.600081    6544 command_runner.go:130] >  Birth: -
	I0507 19:33:44.600155    6544 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0507 19:33:44.600155    6544 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0507 19:33:44.642159    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0507 19:33:45.194770    6544 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0507 19:33:45.194863    6544 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0507 19:33:45.194863    6544 command_runner.go:130] > serviceaccount/kindnet created
	I0507 19:33:45.194863    6544 command_runner.go:130] > daemonset.apps/kindnet created
	I0507 19:33:45.194986    6544 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0507 19:33:45.205550    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:45.206954    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-600000 minikube.k8s.io/updated_at=2024_05_07T19_33_45_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f minikube.k8s.io/name=multinode-600000 minikube.k8s.io/primary=true
	I0507 19:33:45.220604    6544 command_runner.go:130] > -16
	I0507 19:33:45.220635    6544 ops.go:34] apiserver oom_adj: -16
	I0507 19:33:45.395347    6544 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0507 19:33:45.395347    6544 command_runner.go:130] > node/multinode-600000 labeled
	I0507 19:33:45.409880    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:45.512971    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:45.920197    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:46.020559    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:46.418650    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:46.522129    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:46.905622    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:46.995666    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:47.410841    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:47.499570    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:47.909269    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:48.015534    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:48.407352    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:48.495703    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:48.911104    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:49.011314    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:49.410991    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:49.503950    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:49.911796    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:50.016342    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:50.416321    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:50.517237    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:50.915430    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:51.005696    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:51.414546    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:51.510364    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:51.915597    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:52.012462    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:52.414670    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:52.504425    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:52.911692    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:53.007758    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:53.414755    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:53.513549    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:53.918059    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:54.011195    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:54.419948    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:54.516766    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:54.912530    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:54.999148    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:55.410077    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:55.511280    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:55.912559    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:56.004238    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:56.414356    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:56.508815    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:56.912407    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:57.010845    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:57.423451    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:57.543085    6544 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0507 19:33:57.911507    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 19:33:58.041562    6544 command_runner.go:130] > NAME      SECRETS   AGE
	I0507 19:33:58.041622    6544 command_runner.go:130] > default   0         1s
	I0507 19:33:58.041622    6544 kubeadm.go:1107] duration metric: took 12.8456528s to wait for elevateKubeSystemPrivileges
	W0507 19:33:58.041622    6544 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0507 19:33:58.041622    6544 kubeadm.go:393] duration metric: took 25.9784775s to StartCluster
	I0507 19:33:58.041622    6544 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:33:58.041622    6544 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 19:33:58.044682    6544 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:33:58.045757    6544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0507 19:33:58.045757    6544 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0507 19:33:58.046472    6544 addons.go:69] Setting storage-provisioner=true in profile "multinode-600000"
	I0507 19:33:58.046532    6544 addons.go:69] Setting default-storageclass=true in profile "multinode-600000"
	I0507 19:33:58.046532    6544 addons.go:234] Setting addon storage-provisioner=true in "multinode-600000"
	I0507 19:33:58.046590    6544 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-600000"
	I0507 19:33:58.046692    6544 host.go:66] Checking if "multinode-600000" exists ...
	I0507 19:33:58.046820    6544 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:33:58.045757    6544 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.143.74 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 19:33:58.050134    6544 out.go:177] * Verifying Kubernetes components...
	I0507 19:33:58.047137    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:33:58.048348    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:33:58.064733    6544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:33:58.366203    6544 command_runner.go:130] > apiVersion: v1
	I0507 19:33:58.366203    6544 command_runner.go:130] > data:
	I0507 19:33:58.366203    6544 command_runner.go:130] >   Corefile: |
	I0507 19:33:58.366203    6544 command_runner.go:130] >     .:53 {
	I0507 19:33:58.366203    6544 command_runner.go:130] >         errors
	I0507 19:33:58.366203    6544 command_runner.go:130] >         health {
	I0507 19:33:58.366203    6544 command_runner.go:130] >            lameduck 5s
	I0507 19:33:58.366203    6544 command_runner.go:130] >         }
	I0507 19:33:58.366203    6544 command_runner.go:130] >         ready
	I0507 19:33:58.366203    6544 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0507 19:33:58.366203    6544 command_runner.go:130] >            pods insecure
	I0507 19:33:58.366203    6544 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0507 19:33:58.366203    6544 command_runner.go:130] >            ttl 30
	I0507 19:33:58.366203    6544 command_runner.go:130] >         }
	I0507 19:33:58.366203    6544 command_runner.go:130] >         prometheus :9153
	I0507 19:33:58.366203    6544 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0507 19:33:58.366203    6544 command_runner.go:130] >            max_concurrent 1000
	I0507 19:33:58.366203    6544 command_runner.go:130] >         }
	I0507 19:33:58.366203    6544 command_runner.go:130] >         cache 30
	I0507 19:33:58.366203    6544 command_runner.go:130] >         loop
	I0507 19:33:58.366203    6544 command_runner.go:130] >         reload
	I0507 19:33:58.366203    6544 command_runner.go:130] >         loadbalance
	I0507 19:33:58.366203    6544 command_runner.go:130] >     }
	I0507 19:33:58.366203    6544 command_runner.go:130] > kind: ConfigMap
	I0507 19:33:58.366203    6544 command_runner.go:130] > metadata:
	I0507 19:33:58.366203    6544 command_runner.go:130] >   creationTimestamp: "2024-05-07T19:33:44Z"
	I0507 19:33:58.366203    6544 command_runner.go:130] >   name: coredns
	I0507 19:33:58.366203    6544 command_runner.go:130] >   namespace: kube-system
	I0507 19:33:58.366203    6544 command_runner.go:130] >   resourceVersion: "256"
	I0507 19:33:58.366203    6544 command_runner.go:130] >   uid: 9d94adff-6109-4bab-ad1e-f54fd1157894
	I0507 19:33:58.366203    6544 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.19.128.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0507 19:33:58.479095    6544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 19:33:58.987739    6544 command_runner.go:130] > configmap/coredns replaced
	I0507 19:33:58.987739    6544 start.go:946] {"host.minikube.internal": 172.19.128.1} host record injected into CoreDNS's ConfigMap
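The `ssh_runner` command above pipes the coredns ConfigMap through sed to splice a `hosts` block (mapping `host.minikube.internal` to the host IP) in front of the `forward . /etc/resolv.conf` plugin, and a `log` directive in front of `errors`, before feeding the result to `kubectl replace`. A minimal sketch of that transformation in Python — illustrative only, not minikube's actual code path, which shells out to sed on the guest:

```python
def inject_host_record(corefile: str, host_ip: str) -> str:
    # Mirror the sed pipeline: insert a `hosts` block before the
    # `forward . /etc/resolv.conf` line and `log` before `errors`.
    out = []
    for line in corefile.splitlines():
        stripped = line.strip()
        if stripped.startswith("forward . /etc/resolv.conf"):
            out += [
                "        hosts {",
                "           %s host.minikube.internal" % host_ip,
                "           fallthrough",
                "        }",
            ]
        if stripped == "errors":
            out.append("        log")
        out.append(line)
    return "\n".join(out)
```

With the Corefile shown earlier in the log, this yields the "host record injected" state that start.go:946 reports.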
	I0507 19:33:58.989083    6544 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 19:33:58.989083    6544 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 19:33:58.989794    6544 kapi.go:59] client config for multinode-600000: &rest.Config{Host:"https://172.19.143.74:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 19:33:58.989794    6544 kapi.go:59] client config for multinode-600000: &rest.Config{Host:"https://172.19.143.74:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 19:33:58.991042    6544 cert_rotation.go:137] Starting client certificate rotation controller
	I0507 19:33:58.991042    6544 node_ready.go:35] waiting up to 6m0s for node "multinode-600000" to be "Ready" ...
	I0507 19:33:58.991669    6544 round_trippers.go:463] GET https://172.19.143.74:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0507 19:33:58.991726    6544 round_trippers.go:469] Request Headers:
	I0507 19:33:58.991726    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:33:58.991774    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:33:58.991774    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:33:58.991774    6544 round_trippers.go:469] Request Headers:
	I0507 19:33:58.991774    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:33:58.991774    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:33:59.010298    6544 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0507 19:33:59.010442    6544 round_trippers.go:577] Response Headers:
	I0507 19:33:59.010442    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:33:59.010442    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:33:59 GMT
	I0507 19:33:59.010535    6544 round_trippers.go:580]     Audit-Id: 7fa8535c-fb5c-4b4f-9f98-d5049a070322
	I0507 19:33:59.010535    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:33:59.010535    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:33:59.010535    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:33:59.010593    6544 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0507 19:33:59.010593    6544 round_trippers.go:577] Response Headers:
	I0507 19:33:59.010670    6544 round_trippers.go:580]     Audit-Id: 0ffcaa9f-1d17-4bc0-bce7-3c479a89ff4c
	I0507 19:33:59.010670    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:33:59.010670    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:33:59.010670    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:33:59.010670    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:33:59.010670    6544 round_trippers.go:580]     Content-Length: 291
	I0507 19:33:59.010670    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:33:59 GMT
	I0507 19:33:59.010670    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:33:59.010670    6544 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94e31cfb-cfd8-4efb-9273-3ad92d8a2444","resourceVersion":"396","creationTimestamp":"2024-05-07T19:33:44Z"},"spec":{"replicas":2},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0507 19:33:59.011262    6544 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94e31cfb-cfd8-4efb-9273-3ad92d8a2444","resourceVersion":"396","creationTimestamp":"2024-05-07T19:33:44Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0507 19:33:59.011809    6544 round_trippers.go:463] PUT https://172.19.143.74:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0507 19:33:59.011866    6544 round_trippers.go:469] Request Headers:
	I0507 19:33:59.011866    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:33:59.011866    6544 round_trippers.go:473]     Content-Type: application/json
	I0507 19:33:59.011866    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:33:59.033887    6544 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0507 19:33:59.033887    6544 round_trippers.go:577] Response Headers:
	I0507 19:33:59.033887    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:33:59.033990    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:33:59.033990    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:33:59.033990    6544 round_trippers.go:580]     Content-Length: 291
	I0507 19:33:59.033990    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:33:59 GMT
	I0507 19:33:59.033990    6544 round_trippers.go:580]     Audit-Id: da90f16e-cf79-4e73-b86c-8d96657b67b4
	I0507 19:33:59.033990    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:33:59.034042    6544 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94e31cfb-cfd8-4efb-9273-3ad92d8a2444","resourceVersion":"399","creationTimestamp":"2024-05-07T19:33:44Z"},"spec":{"replicas":1},"status":{"replicas":2,"selector":"k8s-app=kube-dns"}}
	I0507 19:33:59.500057    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:33:59.500057    6544 round_trippers.go:469] Request Headers:
	I0507 19:33:59.500057    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:33:59.500057    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:33:59.500057    6544 round_trippers.go:463] GET https://172.19.143.74:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0507 19:33:59.500057    6544 round_trippers.go:469] Request Headers:
	I0507 19:33:59.500057    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:33:59.500057    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:33:59.503604    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:33:59.503659    6544 round_trippers.go:577] Response Headers:
	I0507 19:33:59.504354    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:33:59 GMT
	I0507 19:33:59.504354    6544 round_trippers.go:580]     Audit-Id: 94be0edb-38b8-43fe-9dfe-67e300605117
	I0507 19:33:59.504354    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:33:59.504354    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:33:59.504354    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:33:59.504354    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:33:59.504354    6544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:33:59.504354    6544 round_trippers.go:577] Response Headers:
	I0507 19:33:59.504354    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:33:59.504354    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:33:59.504354    6544 round_trippers.go:580]     Content-Length: 291
	I0507 19:33:59.504354    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:33:59 GMT
	I0507 19:33:59.504354    6544 round_trippers.go:580]     Audit-Id: b7a82aa1-6636-4340-a637-5a38ff050698
	I0507 19:33:59.504354    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:33:59.504354    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:33:59.504354    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:33:59.504354    6544 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"94e31cfb-cfd8-4efb-9273-3ad92d8a2444","resourceVersion":"410","creationTimestamp":"2024-05-07T19:33:44Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0507 19:33:59.504354    6544 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-600000" context rescaled to 1 replicas
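The request/response pairs above show how the rescale is done: GET the deployment's `autoscaling/v1` Scale subresource, set `spec.replicas` to 1, and PUT the object back (note the `resourceVersion` in the body, which lets the apiserver reject conflicting writes). A small sketch of the body construction — illustrative, not minikube's client-go code:

```python
import json

def rescale_body(scale_response: str, replicas: int) -> str:
    # Take the Scale object returned by GET (as in the log above),
    # update spec.replicas, and serialize it as the PUT request body.
    # Field names follow the autoscaling/v1 Scale schema shown in the log.
    scale = json.loads(scale_response)
    scale["spec"]["replicas"] = replicas
    return json.dumps(scale)
```

The `status.replicas` field is left untouched; it only converges to the new `spec.replicas` once the controller has actually scaled the pods, which is why the log's follow-up GET shows `"status":{"replicas":1,...}` a moment later.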
	I0507 19:33:59.992755    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:33:59.992755    6544 round_trippers.go:469] Request Headers:
	I0507 19:33:59.992755    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:33:59.992755    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:33:59.995774    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:33:59.995774    6544 round_trippers.go:577] Response Headers:
	I0507 19:33:59.995774    6544 round_trippers.go:580]     Audit-Id: 74838fec-f3b3-48ce-b771-3e546fe78366
	I0507 19:33:59.995774    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:33:59.995774    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:33:59.995774    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:33:59.995774    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:33:59.995774    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:00 GMT
	I0507 19:33:59.995774    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:00.109628    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:34:00.110257    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:00.110850    6544 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 19:34:00.111439    6544 kapi.go:59] client config for multinode-600000: &rest.Config{Host:"https://172.19.143.74:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 19:34:00.112022    6544 addons.go:234] Setting addon default-storageclass=true in "multinode-600000"
	I0507 19:34:00.112022    6544 host.go:66] Checking if "multinode-600000" exists ...
	I0507 19:34:00.112619    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:34:00.116142    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:34:00.116142    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:00.119200    6544 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 19:34:00.121688    6544 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 19:34:00.121752    6544 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0507 19:34:00.121752    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:34:00.498448    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:00.498524    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:00.498524    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:00.498524    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:00.501236    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:00.502081    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:00.502141    6544 round_trippers.go:580]     Audit-Id: 59a85258-ccef-42b6-a7a6-4491281a9519
	I0507 19:34:00.502141    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:00.502141    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:00.502141    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:00.502141    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:00.502239    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:00 GMT
	I0507 19:34:00.502590    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:00.991604    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:00.991604    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:00.991604    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:00.991604    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:00.998520    6544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:34:00.998520    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:00.998520    6544 round_trippers.go:580]     Audit-Id: 9a3c3c5b-340e-482f-89bd-580239fcdf53
	I0507 19:34:00.998520    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:00.998520    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:00.998520    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:00.998520    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:00.998520    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:01 GMT
	I0507 19:34:00.998520    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:00.999742    6544 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
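The repeating GET /api/v1/nodes/multinode-600000 requests are node_ready's poll loop: it fetches the Node object roughly every 500ms (for up to the 6m0s stated earlier) until the Node reports Ready. The check itself inspects `status.conditions` in the core/v1 Node schema; a minimal sketch, not minikube's actual node_ready.go code:

```python
def node_is_ready(node: dict) -> bool:
    # A Node is "Ready" when status.conditions contains a condition of
    # type "Ready" whose status is the string "True" (core/v1 Node API).
    for cond in node.get("status", {}).get("conditions", []):
        if cond.get("type") == "Ready":
            return cond.get("status") == "True"
    return False
```

Against the response bodies logged above (condition still "False" while kubelet finishes startup), this returns False, matching the `has status "Ready":"False"` line.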
	I0507 19:34:01.498919    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:01.498919    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:01.498919    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:01.498919    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:01.502140    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:34:01.502140    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:01.502140    6544 round_trippers.go:580]     Audit-Id: 565b82f0-6484-4fe2-8544-e707653d416f
	I0507 19:34:01.502140    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:01.502140    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:01.502140    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:01.502140    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:01.502140    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:01 GMT
	I0507 19:34:01.502704    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:01.992236    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:01.992236    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:01.992236    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:01.992236    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:01.995050    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:01.995398    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:01.995398    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:02 GMT
	I0507 19:34:01.995398    6544 round_trippers.go:580]     Audit-Id: ede03625-1512-45d6-9c46-45c3f5198d41
	I0507 19:34:01.995398    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:01.995398    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:01.995398    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:01.995398    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:01.997076    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:02.225217    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:34:02.225217    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:02.225217    6544 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0507 19:34:02.225217    6544 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0507 19:34:02.225217    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:34:02.277239    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:34:02.277239    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:02.277239    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:34:02.502519    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:02.502567    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:02.502567    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:02.502567    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:02.507337    6544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:34:02.507337    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:02.507337    6544 round_trippers.go:580]     Audit-Id: 0e207ae0-ff25-4b8d-b224-32918c633991
	I0507 19:34:02.507337    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:02.507337    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:02.507337    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:02.507337    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:02.507337    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:02 GMT
	I0507 19:34:02.507337    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:02.994075    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:02.994075    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:02.994075    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:02.994075    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:02.998659    6544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:34:02.998659    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:02.998659    6544 round_trippers.go:580]     Audit-Id: 30890dbd-2b18-4b93-b2df-17fdd020b465
	I0507 19:34:02.998659    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:02.998659    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:02.998659    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:02.998659    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:02.998659    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:03 GMT
	I0507 19:34:02.998659    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:03.503770    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:03.503770    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:03.503770    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:03.503770    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:03.510662    6544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:34:03.510662    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:03.510662    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:03 GMT
	I0507 19:34:03.510662    6544 round_trippers.go:580]     Audit-Id: 4f05bc2a-cfa3-4693-880f-a570b1f0c8a1
	I0507 19:34:03.510662    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:03.510662    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:03.510662    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:03.510662    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:03.510662    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:03.510662    6544 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
	I0507 19:34:03.994390    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:03.994390    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:03.994390    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:03.994390    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:03.998204    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:34:03.998204    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:03.998291    6544 round_trippers.go:580]     Audit-Id: d13f599e-1a82-4dc7-a110-0f52a9a51906
	I0507 19:34:03.998291    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:03.998309    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:03.998338    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:03.998338    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:03.998338    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:04 GMT
	I0507 19:34:03.998686    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:04.318772    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:34:04.319411    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:04.319492    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:34:04.492639    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:04.492639    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:04.492639    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:04.492639    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:04.496624    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:34:04.496624    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:04.496624    6544 round_trippers.go:580]     Audit-Id: 5e09e15c-ca39-4f7b-8511-ea270f7b2924
	I0507 19:34:04.496624    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:04.496624    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:04.496624    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:04.496624    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:04.496624    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:04 GMT
	I0507 19:34:04.496624    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:04.778529    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:34:04.778529    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:04.778529    6544 sshutil.go:53] new ssh client: &{IP:172.19.143.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\id_rsa Username:docker}
	I0507 19:34:04.927293    6544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 19:34:04.997649    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:04.997649    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:04.997649    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:04.997649    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:05.000385    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:05.000385    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:05.000385    6544 round_trippers.go:580]     Audit-Id: c9b55d91-ccbd-4956-828f-af6e5113f4f3
	I0507 19:34:05.000385    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:05.000385    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:05.000950    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:05.000950    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:05.000950    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:05 GMT
	I0507 19:34:05.001160    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:05.472474    6544 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0507 19:34:05.472547    6544 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0507 19:34:05.472547    6544 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0507 19:34:05.472547    6544 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0507 19:34:05.472547    6544 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0507 19:34:05.472547    6544 command_runner.go:130] > pod/storage-provisioner created
	I0507 19:34:05.504742    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:05.504833    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:05.504833    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:05.504833    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:05.507570    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:05.507570    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:05.507570    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:05.507570    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:05.507570    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:05.507570    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:05 GMT
	I0507 19:34:05.507570    6544 round_trippers.go:580]     Audit-Id: e06182d7-6be5-4301-b2c1-f053e7c3774b
	I0507 19:34:05.507570    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:05.507570    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:05.996487    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:05.996565    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:05.996565    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:05.996565    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:05.999208    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:05.999208    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:05.999208    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:06 GMT
	I0507 19:34:05.999208    6544 round_trippers.go:580]     Audit-Id: 1aff428b-9d1f-42ba-868d-a966b34e1fa4
	I0507 19:34:05.999208    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:05.999208    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:05.999208    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:05.999208    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:06.000495    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:06.000817    6544 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
	I0507 19:34:06.502588    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:06.502588    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:06.502588    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:06.502588    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:06.506502    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:34:06.506557    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:06.506557    6544 round_trippers.go:580]     Audit-Id: d8efa748-da2f-4a82-a5cb-a6406eb743bf
	I0507 19:34:06.506557    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:06.506557    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:06.506557    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:06.506557    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:06.506557    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:06 GMT
	I0507 19:34:06.506557    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:06.697835    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:34:06.698628    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:06.698683    6544 sshutil.go:53] new ssh client: &{IP:172.19.143.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\id_rsa Username:docker}
	I0507 19:34:06.831251    6544 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0507 19:34:06.978127    6544 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0507 19:34:06.978543    6544 round_trippers.go:463] GET https://172.19.143.74:8443/apis/storage.k8s.io/v1/storageclasses
	I0507 19:34:06.978658    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:06.978658    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:06.978658    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:06.989545    6544 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0507 19:34:06.989545    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:06.989545    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:06.989545    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:06.989545    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:06.989545    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:06.989545    6544 round_trippers.go:580]     Content-Length: 1273
	I0507 19:34:06.989545    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:07 GMT
	I0507 19:34:06.989545    6544 round_trippers.go:580]     Audit-Id: 85e0569f-66fc-49e5-8af6-1c6be3f05c69
	I0507 19:34:06.989545    6544 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"436"},"items":[{"metadata":{"name":"standard","uid":"ca35a4a0-e3e3-4f55-b661-767926dbacf3","resourceVersion":"436","creationTimestamp":"2024-05-07T19:34:07Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-07T19:34:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kuberne
tes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is- [truncated 249 chars]
	I0507 19:34:06.990421    6544 request.go:1212] Request Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"ca35a4a0-e3e3-4f55-b661-767926dbacf3","resourceVersion":"436","creationTimestamp":"2024-05-07T19:34:07Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-07T19:34:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclas
s.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0507 19:34:06.990421    6544 round_trippers.go:463] PUT https://172.19.143.74:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0507 19:34:06.990421    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:06.990547    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:06.990547    6544 round_trippers.go:473]     Content-Type: application/json
	I0507 19:34:06.990547    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:06.993837    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:34:06.993837    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:06.993837    6544 round_trippers.go:580]     Audit-Id: a43ace7f-4e19-4861-8c4f-9680295921c7
	I0507 19:34:06.993837    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:06.993837    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:06.993837    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:06.993837    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:06.993837    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:06.993837    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:06.993837    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:06.993837    6544 round_trippers.go:580]     Content-Length: 1220
	I0507 19:34:06.993837    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:06.993837    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:07 GMT
	I0507 19:34:06.994611    6544 request.go:1212] Response Body: {"kind":"StorageClass","apiVersion":"storage.k8s.io/v1","metadata":{"name":"standard","uid":"ca35a4a0-e3e3-4f55-b661-767926dbacf3","resourceVersion":"436","creationTimestamp":"2024-05-07T19:34:07Z","labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"annotations":{"kubectl.kubernetes.io/last-applied-configuration":"{\"apiVersion\":\"storage.k8s.io/v1\",\"kind\":\"StorageClass\",\"metadata\":{\"annotations\":{\"storageclass.kubernetes.io/is-default-class\":\"true\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"EnsureExists\"},\"name\":\"standard\"},\"provisioner\":\"k8s.io/minikube-hostpath\"}\n","storageclass.kubernetes.io/is-default-class":"true"},"managedFields":[{"manager":"kubectl-client-side-apply","operation":"Update","apiVersion":"storage.k8s.io/v1","time":"2024-05-07T19:34:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storagecla
ss.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanag [truncated 196 chars]
	I0507 19:34:06.999552    6544 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0507 19:34:07.001905    6544 addons.go:505] duration metric: took 8.9555512s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0507 19:34:07.000541    6544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:34:07.001905    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:07.001905    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:07.001905    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:07 GMT
	I0507 19:34:07.001905    6544 round_trippers.go:580]     Audit-Id: 933eddc3-7908-4ed5-ae45-2d4098f162f4
	I0507 19:34:07.001905    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:07.001905    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:07.001905    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:07.001905    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:07.504948    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:07.504948    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:07.504948    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:07.504948    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:07.508349    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:34:07.508349    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:07.508349    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:07.508349    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:07 GMT
	I0507 19:34:07.508349    6544 round_trippers.go:580]     Audit-Id: 2c89ef16-ee3c-4053-9db7-b61bce4ac5f7
	I0507 19:34:07.508349    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:07.509339    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:07.509339    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:07.509626    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:07.993905    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:07.993992    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:07.993992    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:07.993992    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:07.997403    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:34:07.997403    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:07.997403    6544 round_trippers.go:580]     Audit-Id: ca033502-b844-4430-bbe9-8860227f5c0c
	I0507 19:34:07.997403    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:07.997403    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:07.997403    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:07.997403    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:07.997403    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:08 GMT
	I0507 19:34:07.998141    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:08.493195    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:08.493195    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:08.493195    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:08.493195    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:08.497264    6544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:34:08.497264    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:08.497264    6544 round_trippers.go:580]     Audit-Id: 4510c9c0-62ae-4b86-9c82-8ca2f2fcf998
	I0507 19:34:08.497264    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:08.497264    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:08.497264    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:08.497264    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:08.497264    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:08 GMT
	I0507 19:34:08.497264    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:08.498201    6544 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
	I0507 19:34:09.007282    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:09.007282    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:09.007378    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:09.007378    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:09.010611    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:34:09.010611    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:09.010611    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:09.010611    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:09.010611    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:09 GMT
	I0507 19:34:09.010611    6544 round_trippers.go:580]     Audit-Id: 8e67ac1e-885d-4560-8f26-b275d2133fa8
	I0507 19:34:09.010611    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:09.011186    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:09.011463    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:09.492624    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:09.492624    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:09.492885    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:09.492885    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:09.496337    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:09.496337    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:09.496337    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:09.496433    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:09 GMT
	I0507 19:34:09.496433    6544 round_trippers.go:580]     Audit-Id: 5332673b-60bf-48b1-b585-9edbcc8a8f52
	I0507 19:34:09.496433    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:09.496433    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:09.496433    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:09.496521    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:09.997277    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:09.997363    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:09.997363    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:09.997363    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:10.000662    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:34:10.000662    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:10.000662    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:10.000662    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:10 GMT
	I0507 19:34:10.000662    6544 round_trippers.go:580]     Audit-Id: 3d48cd77-afac-46fe-a49f-d91d52ed89f6
	I0507 19:34:10.000662    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:10.000662    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:10.000662    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:10.001346    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"343","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4928 chars]
	I0507 19:34:10.498035    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:10.498096    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:10.498096    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:10.498096    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:10.505688    6544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:34:10.505688    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:10.505688    6544 round_trippers.go:580]     Audit-Id: da2ecead-870a-4ad7-948e-b1abaa3aacc6
	I0507 19:34:10.505688    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:10.505688    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:10.505688    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:10.505688    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:10.505688    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:10 GMT
	I0507 19:34:10.506378    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"439","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0507 19:34:10.506378    6544 node_ready.go:49] node "multinode-600000" has status "Ready":"True"
	I0507 19:34:10.506378    6544 node_ready.go:38] duration metric: took 11.5145687s for node "multinode-600000" to be "Ready" ...
	I0507 19:34:10.506378    6544 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 19:34:10.506378    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods
	I0507 19:34:10.506378    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:10.506378    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:10.506378    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:10.527628    6544 round_trippers.go:574] Response Status: 200 OK in 21 milliseconds
	I0507 19:34:10.527628    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:10.527628    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:10.527815    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:10 GMT
	I0507 19:34:10.527815    6544 round_trippers.go:580]     Audit-Id: cc44567d-360a-4a39-92e9-bbfa3917a357
	I0507 19:34:10.527815    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:10.527815    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:10.527815    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:10.529349    6544 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"444"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"441","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 54652 chars]
	I0507 19:34:10.533506    6544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace to be "Ready" ...
	I0507 19:34:10.533506    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:34:10.533506    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:10.533506    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:10.533506    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:10.536068    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:10.536068    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:10.536068    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:10.536068    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:10.536068    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:10.536068    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:10 GMT
	I0507 19:34:10.536068    6544 round_trippers.go:580]     Audit-Id: 5a9d996d-5893-454f-97c8-00cd1655ed2c
	I0507 19:34:10.536068    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:10.536688    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"445","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0507 19:34:10.537445    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:10.537445    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:10.537505    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:10.537505    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:10.540298    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:10.540298    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:10.540298    6544 round_trippers.go:580]     Audit-Id: bdcceeca-ce3f-48ae-848f-45469c4bb61c
	I0507 19:34:10.540298    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:10.540298    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:10.540298    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:10.540298    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:10.540298    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:10 GMT
	I0507 19:34:10.540298    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"439","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0507 19:34:11.048357    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:34:11.048452    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:11.048452    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:11.048452    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:11.052229    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:34:11.052229    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:11.052229    6544 round_trippers.go:580]     Audit-Id: de0bad96-2687-49e2-9c61-3f9e1aea630c
	I0507 19:34:11.052229    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:11.052229    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:11.052229    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:11.052229    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:11.052229    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:11 GMT
	I0507 19:34:11.052686    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"445","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0507 19:34:11.054264    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:11.054264    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:11.054321    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:11.054321    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:11.056561    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:11.056561    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:11.056561    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:11.056561    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:11 GMT
	I0507 19:34:11.056561    6544 round_trippers.go:580]     Audit-Id: 4669a21d-55d8-48b0-bd4f-e3e0fe390c9a
	I0507 19:34:11.056561    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:11.056561    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:11.056561    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:11.056910    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"439","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0507 19:34:11.540639    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:34:11.540639    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:11.540639    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:11.540639    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:11.544205    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:34:11.544205    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:11.544205    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:11.544205    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:11 GMT
	I0507 19:34:11.544205    6544 round_trippers.go:580]     Audit-Id: 7a976c11-f744-4100-90f0-ddb5b46d9ba4
	I0507 19:34:11.544205    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:11.544205    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:11.544205    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:11.544959    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"445","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0507 19:34:11.545897    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:11.546089    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:11.546089    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:11.546089    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:11.553041    6544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:34:11.553041    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:11.553041    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:11.553041    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:11 GMT
	I0507 19:34:11.553041    6544 round_trippers.go:580]     Audit-Id: 8edc416e-9fac-4faf-a503-c9b344d1b997
	I0507 19:34:11.553041    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:11.553041    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:11.553041    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:11.553761    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"439","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0507 19:34:12.047923    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:34:12.048305    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:12.048305    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:12.048305    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:12.054450    6544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:34:12.054450    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:12.054450    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:12.054450    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:12.054450    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:12.054450    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:12.054450    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:12 GMT
	I0507 19:34:12.054450    6544 round_trippers.go:580]     Audit-Id: 4e9cf594-3d4b-4937-a183-f99efdd0d620
	I0507 19:34:12.054999    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"445","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0507 19:34:12.055910    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:12.055910    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:12.055910    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:12.055910    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:12.058516    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:12.059551    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:12.059551    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:12.059620    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:12.059620    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:12.059620    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:12.059620    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:12 GMT
	I0507 19:34:12.059620    6544 round_trippers.go:580]     Audit-Id: c2ea7ff1-e2c9-4bed-9e0b-157b89862e9a
	I0507 19:34:12.060395    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"439","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0507 19:34:12.549324    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:34:12.549324    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:12.549412    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:12.549412    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:12.552832    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:34:12.552832    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:12.552832    6544 round_trippers.go:580]     Audit-Id: 17a696ae-2eb3-4847-be65-39ea0be361f2
	I0507 19:34:12.552832    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:12.552832    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:12.552832    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:12.552832    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:12.552832    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:12 GMT
	I0507 19:34:12.553770    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"445","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6447 chars]
	I0507 19:34:12.554810    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:12.554810    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:12.554928    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:12.554928    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:12.557226    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:12.557226    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:12.557226    6544 round_trippers.go:580]     Audit-Id: 640a7abf-bc32-465b-8500-a73a01137c98
	I0507 19:34:12.558060    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:12.558060    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:12.558060    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:12.558060    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:12.558060    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:12 GMT
	I0507 19:34:12.558564    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"439","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0507 19:34:12.559282    6544 pod_ready.go:102] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"False"
	I0507 19:34:13.037397    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:34:13.037681    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:13.037681    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:13.037681    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:13.049603    6544 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0507 19:34:13.049603    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:13.049603    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:13.049603    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:13.049603    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:13 GMT
	I0507 19:34:13.049603    6544 round_trippers.go:580]     Audit-Id: ac1fc4eb-6890-476a-976b-58b284ae1932
	I0507 19:34:13.049603    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:13.049715    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:13.049872    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"459","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0507 19:34:13.050112    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:13.050112    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:13.050112    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:13.050112    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:13.055810    6544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:34:13.055810    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:13.055810    6544 round_trippers.go:580]     Audit-Id: a2589151-88da-4e8d-b222-3728de5cbfe8
	I0507 19:34:13.055810    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:13.055810    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:13.055810    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:13.055810    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:13.055810    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:13 GMT
	I0507 19:34:13.055810    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"439","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0507 19:34:13.056531    6544 pod_ready.go:92] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"True"
	I0507 19:34:13.056531    6544 pod_ready.go:81] duration metric: took 2.5228573s for pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace to be "Ready" ...
	I0507 19:34:13.056531    6544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:34:13.056531    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-600000
	I0507 19:34:13.056531    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:13.056531    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:13.056531    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:13.059823    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:34:13.059823    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:13.059823    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:13.059823    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:13.059823    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:13 GMT
	I0507 19:34:13.059823    6544 round_trippers.go:580]     Audit-Id: f2e6fd70-2970-4712-bc3e-54aaa0b9cd65
	I0507 19:34:13.059823    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:13.059823    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:13.060783    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-600000","namespace":"kube-system","uid":"d55601ee-11f4-432c-8170-ecc4d8212782","resourceVersion":"421","creationTimestamp":"2024-05-07T19:33:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.143.74:2379","kubernetes.io/config.hash":"d902475f151631231b80fe38edab39e8","kubernetes.io/config.mirror":"d902475f151631231b80fe38edab39e8","kubernetes.io/config.seen":"2024-05-07T19:33:44.165678627Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0507 19:34:13.061369    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:13.061400    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:13.061400    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:13.061400    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:13.063627    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:13.063627    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:13.063627    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:13.063627    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:13.063627    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:13.063627    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:13.063627    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:13 GMT
	I0507 19:34:13.064545    6544 round_trippers.go:580]     Audit-Id: 6d85c6e1-444a-443b-b1ce-c555b4f74019
	I0507 19:34:13.065050    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"439","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0507 19:34:13.065177    6544 pod_ready.go:92] pod "etcd-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 19:34:13.065177    6544 pod_ready.go:81] duration metric: took 8.6459ms for pod "etcd-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:34:13.065177    6544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:34:13.065734    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600000
	I0507 19:34:13.065734    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:13.065734    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:13.065786    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:13.068429    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:13.068429    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:13.068429    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:13.068429    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:13.068429    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:13 GMT
	I0507 19:34:13.068429    6544 round_trippers.go:580]     Audit-Id: 159cc56b-4944-42a7-813c-a34f93af0f3c
	I0507 19:34:13.068429    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:13.068429    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:13.068429    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600000","namespace":"kube-system","uid":"c2ba4e1a-3041-4395-a246-9dd28358b95a","resourceVersion":"420","creationTimestamp":"2024-05-07T19:33:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.143.74:8443","kubernetes.io/config.hash":"b4a96b44957f27b92ef21190115bc428","kubernetes.io/config.mirror":"b4a96b44957f27b92ef21190115bc428","kubernetes.io/config.seen":"2024-05-07T19:33:44.165672227Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0507 19:34:13.069767    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:13.069767    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:13.069767    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:13.069767    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:13.072060    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:13.072060    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:13.072060    6544 round_trippers.go:580]     Audit-Id: 40d5fa94-c1af-437e-9694-fd74e94d07bf
	I0507 19:34:13.072060    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:13.072060    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:13.072060    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:13.072749    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:13.072749    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:13 GMT
	I0507 19:34:13.072901    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"439","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0507 19:34:13.073255    6544 pod_ready.go:92] pod "kube-apiserver-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 19:34:13.073255    6544 pod_ready.go:81] duration metric: took 8.0767ms for pod "kube-apiserver-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:34:13.073255    6544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:34:13.073367    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-600000
	I0507 19:34:13.073367    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:13.073367    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:13.073367    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:13.076000    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:13.076000    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:13.076000    6544 round_trippers.go:580]     Audit-Id: b962f0c2-ca64-4764-a9e1-cb18970929d4
	I0507 19:34:13.076000    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:13.076000    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:13.076000    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:13.076000    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:13.076000    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:13 GMT
	I0507 19:34:13.076552    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-600000","namespace":"kube-system","uid":"b960b526-da40-480d-9a72-9ab8c7f2989a","resourceVersion":"418","creationTimestamp":"2024-05-07T19:33:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f5d6aa60dc93b5e562f37ed2236c3022","kubernetes.io/config.mirror":"f5d6aa60dc93b5e562f37ed2236c3022","kubernetes.io/config.seen":"2024-05-07T19:33:37.010155750Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0507 19:34:13.076651    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:13.076651    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:13.076651    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:13.076651    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:13.081556    6544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:34:13.081556    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:13.081556    6544 round_trippers.go:580]     Audit-Id: 0592f837-3ac3-4531-b501-ca305bb1e783
	I0507 19:34:13.081556    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:13.081556    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:13.081556    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:13.081556    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:13.081556    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:13 GMT
	I0507 19:34:13.081556    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"439","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0507 19:34:13.082593    6544 pod_ready.go:92] pod "kube-controller-manager-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 19:34:13.082636    6544 pod_ready.go:81] duration metric: took 9.3322ms for pod "kube-controller-manager-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:34:13.082636    6544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c9gw5" in "kube-system" namespace to be "Ready" ...
	I0507 19:34:13.082799    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9gw5
	I0507 19:34:13.082848    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:13.082848    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:13.082848    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:13.085165    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:13.085165    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:13.085165    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:13.085165    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:13 GMT
	I0507 19:34:13.085165    6544 round_trippers.go:580]     Audit-Id: fbcd6ed7-3214-4ad7-88b6-adade65c2a0a
	I0507 19:34:13.085165    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:13.085165    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:13.085165    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:13.085165    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c9gw5","generateName":"kube-proxy-","namespace":"kube-system","uid":"9a39807c-6243-4aa2-86f4-8626031c80a6","resourceVersion":"414","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"952e0024-0710-460c-920c-3959ceadbd10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"952e0024-0710-460c-920c-3959ceadbd10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0507 19:34:13.086527    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:13.086575    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:13.086628    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:13.086628    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:13.088884    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:13.088884    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:13.088884    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:13 GMT
	I0507 19:34:13.088884    6544 round_trippers.go:580]     Audit-Id: 26a20ef2-b270-4d95-b4f5-3f15b86ad638
	I0507 19:34:13.088884    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:13.088884    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:13.088884    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:13.088884    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:13.088884    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"439","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0507 19:34:13.089828    6544 pod_ready.go:92] pod "kube-proxy-c9gw5" in "kube-system" namespace has status "Ready":"True"
	I0507 19:34:13.089885    6544 pod_ready.go:81] duration metric: took 7.1271ms for pod "kube-proxy-c9gw5" in "kube-system" namespace to be "Ready" ...
	I0507 19:34:13.089885    6544 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:34:13.241140    6544 request.go:629] Waited for 151.008ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600000
	I0507 19:34:13.241249    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600000
	I0507 19:34:13.241249    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:13.241249    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:13.241249    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:13.244460    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:13.244576    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:13.244576    6544 round_trippers.go:580]     Audit-Id: 13a68421-3c23-4973-9e2e-d415fab45fb3
	I0507 19:34:13.244576    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:13.244576    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:13.244576    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:13.244576    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:13.244576    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:13 GMT
	I0507 19:34:13.244751    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-600000","namespace":"kube-system","uid":"ec3ac949-cb83-49be-a908-c93e23135ae8","resourceVersion":"419","creationTimestamp":"2024-05-07T19:33:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c4ee79f6d4f6adb00b636f817445fef","kubernetes.io/config.mirror":"7c4ee79f6d4f6adb00b636f817445fef","kubernetes.io/config.seen":"2024-05-07T19:33:44.165677427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0507 19:34:13.444023    6544 request.go:629] Waited for 198.5197ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:13.444385    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:34:13.444385    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:13.444385    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:13.444385    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:13.449842    6544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:34:13.449842    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:13.449842    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:13.449842    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:13 GMT
	I0507 19:34:13.449945    6544 round_trippers.go:580]     Audit-Id: c1616550-373a-48c6-a792-2947d8a4fe6e
	I0507 19:34:13.449945    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:13.449945    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:13.449945    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:13.449945    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"439","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4783 chars]
	I0507 19:34:13.450568    6544 pod_ready.go:92] pod "kube-scheduler-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 19:34:13.450568    6544 pod_ready.go:81] duration metric: took 360.6587ms for pod "kube-scheduler-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:34:13.450568    6544 pod_ready.go:38] duration metric: took 2.9439942s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 19:34:13.450568    6544 api_server.go:52] waiting for apiserver process to appear ...
	I0507 19:34:13.460829    6544 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 19:34:13.484922    6544 command_runner.go:130] > 2056
	I0507 19:34:13.485388    6544 api_server.go:72] duration metric: took 15.4372225s to wait for apiserver process to appear ...
	I0507 19:34:13.485388    6544 api_server.go:88] waiting for apiserver healthz status ...
	I0507 19:34:13.485450    6544 api_server.go:253] Checking apiserver healthz at https://172.19.143.74:8443/healthz ...
	I0507 19:34:13.494277    6544 api_server.go:279] https://172.19.143.74:8443/healthz returned 200:
	ok
	I0507 19:34:13.494938    6544 round_trippers.go:463] GET https://172.19.143.74:8443/version
	I0507 19:34:13.495015    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:13.495056    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:13.495056    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:13.496193    6544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0507 19:34:13.496193    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:13.496193    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:13.496193    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:13.496193    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:13.496193    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:13.496193    6544 round_trippers.go:580]     Content-Length: 263
	I0507 19:34:13.496193    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:13 GMT
	I0507 19:34:13.496193    6544 round_trippers.go:580]     Audit-Id: 87161791-80a2-47ef-adee-0b29314c43ee
	I0507 19:34:13.496193    6544 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0507 19:34:13.496517    6544 api_server.go:141] control plane version: v1.30.0
	I0507 19:34:13.496517    6544 api_server.go:131] duration metric: took 11.1285ms to wait for apiserver health ...
	I0507 19:34:13.496517    6544 system_pods.go:43] waiting for kube-system pods to appear ...
	I0507 19:34:13.647948    6544 request.go:629] Waited for 151.123ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods
	I0507 19:34:13.648059    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods
	I0507 19:34:13.648059    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:13.648059    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:13.648059    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:13.651840    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:34:13.652516    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:13.652516    6544 round_trippers.go:580]     Audit-Id: e024ed16-f9b8-4066-8351-bd6e92a5f5ee
	I0507 19:34:13.652516    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:13.652516    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:13.652516    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:13.652516    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:13.652516    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:13 GMT
	I0507 19:34:13.654247    6544 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"465"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"459","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0507 19:34:13.657079    6544 system_pods.go:59] 8 kube-system pods found
	I0507 19:34:13.657079    6544 system_pods.go:61] "coredns-7db6d8ff4d-5j966" [d067d438-f4af-42e8-930d-3423a3ac211f] Running
	I0507 19:34:13.657079    6544 system_pods.go:61] "etcd-multinode-600000" [d55601ee-11f4-432c-8170-ecc4d8212782] Running
	I0507 19:34:13.657079    6544 system_pods.go:61] "kindnet-zw4r9" [b5145a4d-38aa-426e-947f-3480e269470e] Running
	I0507 19:34:13.657079    6544 system_pods.go:61] "kube-apiserver-multinode-600000" [c2ba4e1a-3041-4395-a246-9dd28358b95a] Running
	I0507 19:34:13.657079    6544 system_pods.go:61] "kube-controller-manager-multinode-600000" [b960b526-da40-480d-9a72-9ab8c7f2989a] Running
	I0507 19:34:13.657079    6544 system_pods.go:61] "kube-proxy-c9gw5" [9a39807c-6243-4aa2-86f4-8626031c80a6] Running
	I0507 19:34:13.657079    6544 system_pods.go:61] "kube-scheduler-multinode-600000" [ec3ac949-cb83-49be-a908-c93e23135ae8] Running
	I0507 19:34:13.657079    6544 system_pods.go:61] "storage-provisioner" [90142b77-53fb-42e1-94f8-7f8a3c7765ac] Running
	I0507 19:34:13.657079    6544 system_pods.go:74] duration metric: took 160.5513ms to wait for pod list to return data ...
	I0507 19:34:13.657079    6544 default_sa.go:34] waiting for default service account to be created ...
	I0507 19:34:13.849682    6544 request.go:629] Waited for 191.5661ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.143.74:8443/api/v1/namespaces/default/serviceaccounts
	I0507 19:34:13.849682    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/default/serviceaccounts
	I0507 19:34:13.849682    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:13.849682    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:13.849682    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:13.852414    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:13.852414    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:13.852414    6544 round_trippers.go:580]     Audit-Id: 5f03a7b4-3242-497f-8471-edea8d1ad0ba
	I0507 19:34:13.852414    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:13.852414    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:13.852414    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:13.852414    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:13.853012    6544 round_trippers.go:580]     Content-Length: 261
	I0507 19:34:13.853012    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:14 GMT
	I0507 19:34:13.853068    6544 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"465"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c895d506-2b91-4017-a081-00ad98764b6c","resourceVersion":"355","creationTimestamp":"2024-05-07T19:33:57Z"}}]}
	I0507 19:34:13.853498    6544 default_sa.go:45] found service account: "default"
	I0507 19:34:13.853551    6544 default_sa.go:55] duration metric: took 196.4591ms for default service account to be created ...
	I0507 19:34:13.853551    6544 system_pods.go:116] waiting for k8s-apps to be running ...
	I0507 19:34:14.050939    6544 request.go:629] Waited for 197.0776ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods
	I0507 19:34:14.051245    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods
	I0507 19:34:14.051245    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:14.051245    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:14.051245    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:14.058485    6544 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:34:14.058485    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:14.058485    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:14.058485    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:14 GMT
	I0507 19:34:14.058485    6544 round_trippers.go:580]     Audit-Id: 34167b69-c981-40a9-9329-e29a700c0294
	I0507 19:34:14.058485    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:14.058485    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:14.059003    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:14.060067    6544 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"465"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"459","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 56451 chars]
	I0507 19:34:14.063693    6544 system_pods.go:86] 8 kube-system pods found
	I0507 19:34:14.063693    6544 system_pods.go:89] "coredns-7db6d8ff4d-5j966" [d067d438-f4af-42e8-930d-3423a3ac211f] Running
	I0507 19:34:14.063693    6544 system_pods.go:89] "etcd-multinode-600000" [d55601ee-11f4-432c-8170-ecc4d8212782] Running
	I0507 19:34:14.063750    6544 system_pods.go:89] "kindnet-zw4r9" [b5145a4d-38aa-426e-947f-3480e269470e] Running
	I0507 19:34:14.063750    6544 system_pods.go:89] "kube-apiserver-multinode-600000" [c2ba4e1a-3041-4395-a246-9dd28358b95a] Running
	I0507 19:34:14.063750    6544 system_pods.go:89] "kube-controller-manager-multinode-600000" [b960b526-da40-480d-9a72-9ab8c7f2989a] Running
	I0507 19:34:14.063750    6544 system_pods.go:89] "kube-proxy-c9gw5" [9a39807c-6243-4aa2-86f4-8626031c80a6] Running
	I0507 19:34:14.063750    6544 system_pods.go:89] "kube-scheduler-multinode-600000" [ec3ac949-cb83-49be-a908-c93e23135ae8] Running
	I0507 19:34:14.063750    6544 system_pods.go:89] "storage-provisioner" [90142b77-53fb-42e1-94f8-7f8a3c7765ac] Running
	I0507 19:34:14.063750    6544 system_pods.go:126] duration metric: took 210.1446ms to wait for k8s-apps to be running ...
	I0507 19:34:14.063824    6544 system_svc.go:44] waiting for kubelet service to be running ....
	I0507 19:34:14.073113    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 19:34:14.103201    6544 system_svc.go:56] duration metric: took 39.347ms WaitForService to wait for kubelet
	I0507 19:34:14.103259    6544 kubeadm.go:576] duration metric: took 16.0550526s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 19:34:14.103259    6544 node_conditions.go:102] verifying NodePressure condition ...
	I0507 19:34:14.240168    6544 request.go:629] Waited for 136.7763ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.143.74:8443/api/v1/nodes
	I0507 19:34:14.240388    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes
	I0507 19:34:14.240388    6544 round_trippers.go:469] Request Headers:
	I0507 19:34:14.240482    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:34:14.240525    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:34:14.243162    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:34:14.243694    6544 round_trippers.go:577] Response Headers:
	I0507 19:34:14.243694    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:34:14.243694    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:34:14.243694    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:34:14.243694    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:34:14 GMT
	I0507 19:34:14.243694    6544 round_trippers.go:580]     Audit-Id: a17919ab-4cc4-4331-843e-d17b01293da2
	I0507 19:34:14.243787    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:34:14.244026    6544 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"465"},"items":[{"metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"439","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 4836 chars]
	I0507 19:34:14.244727    6544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 19:34:14.244727    6544 node_conditions.go:123] node cpu capacity is 2
	I0507 19:34:14.244727    6544 node_conditions.go:105] duration metric: took 141.459ms to run NodePressure ...
	I0507 19:34:14.244727    6544 start.go:240] waiting for startup goroutines ...
	I0507 19:34:14.244727    6544 start.go:245] waiting for cluster config update ...
	I0507 19:34:14.244727    6544 start.go:254] writing updated cluster config ...
	I0507 19:34:14.249348    6544 out.go:177] 
	I0507 19:34:14.252075    6544 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:34:14.258839    6544 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:34:14.259868    6544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\config.json ...
	I0507 19:34:14.264845    6544 out.go:177] * Starting "multinode-600000-m02" worker node in "multinode-600000" cluster
	I0507 19:34:14.267846    6544 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 19:34:14.267846    6544 cache.go:56] Caching tarball of preloaded images
	I0507 19:34:14.267846    6544 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0507 19:34:14.267846    6544 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 19:34:14.267846    6544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\config.json ...
	I0507 19:34:14.271857    6544 start.go:360] acquireMachinesLock for multinode-600000-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 19:34:14.271857    6544 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-600000-m02"
	I0507 19:34:14.271857    6544 start.go:93] Provisioning new machine with config: &{Name:multinode-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{
KubernetesVersion:v1.30.0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.143.74 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDis
ks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0507 19:34:14.271857    6544 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0507 19:34:14.274844    6544 out.go:204] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0507 19:34:14.274844    6544 start.go:159] libmachine.API.Create for "multinode-600000" (driver="hyperv")
	I0507 19:34:14.274844    6544 client.go:168] LocalClient.Create starting
	I0507 19:34:14.274844    6544 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem
	I0507 19:34:14.274844    6544 main.go:141] libmachine: Decoding PEM data...
	I0507 19:34:14.274844    6544 main.go:141] libmachine: Parsing certificate...
	I0507 19:34:14.274844    6544 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem
	I0507 19:34:14.275856    6544 main.go:141] libmachine: Decoding PEM data...
	I0507 19:34:14.275856    6544 main.go:141] libmachine: Parsing certificate...
	I0507 19:34:14.275856    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0507 19:34:16.004886    6544 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0507 19:34:16.004986    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:16.004986    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0507 19:34:17.565472    6544 main.go:141] libmachine: [stdout =====>] : False
	
	I0507 19:34:17.565472    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:17.565472    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 19:34:18.898235    6544 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 19:34:18.898235    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:18.898327    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 19:34:22.157436    6544 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 19:34:22.157436    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:22.160119    6544 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube5/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.33.0-1714498396-18779-amd64.iso...
	I0507 19:34:22.492015    6544 main.go:141] libmachine: Creating SSH key...
	I0507 19:34:22.716957    6544 main.go:141] libmachine: Creating VM...
	I0507 19:34:22.716957    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0507 19:34:25.333356    6544 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0507 19:34:25.333356    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:25.333928    6544 main.go:141] libmachine: Using switch "Default Switch"
	I0507 19:34:25.333928    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0507 19:34:26.924295    6544 main.go:141] libmachine: [stdout =====>] : True
	
	I0507 19:34:26.924295    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:26.924365    6544 main.go:141] libmachine: Creating VHD
	I0507 19:34:26.924365    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0507 19:34:30.445630    6544 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube5
	Path                    : C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 8683EFC3-0A5D-4B29-AEB3-0396504DC773
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0507 19:34:30.445630    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:30.445630    6544 main.go:141] libmachine: Writing magic tar header
	I0507 19:34:30.446062    6544 main.go:141] libmachine: Writing SSH key tar header
	I0507 19:34:30.453941    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0507 19:34:33.424759    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:34:33.424759    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:33.424926    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\disk.vhd' -SizeBytes 20000MB
	I0507 19:34:35.779074    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:34:35.780113    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:35.780186    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-600000-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0507 19:34:39.131218    6544 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-600000-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0507 19:34:39.131312    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:39.131385    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-600000-m02 -DynamicMemoryEnabled $false
	I0507 19:34:41.204786    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:34:41.205461    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:41.205461    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-600000-m02 -Count 2
	I0507 19:34:43.195824    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:34:43.195824    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:43.196847    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-600000-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\boot2docker.iso'
	I0507 19:34:45.501707    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:34:45.501707    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:45.502683    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-600000-m02 -Path 'C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\disk.vhd'
	I0507 19:34:47.926873    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:34:47.926873    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:47.926873    6544 main.go:141] libmachine: Starting VM...
	I0507 19:34:47.926873    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-600000-m02
	I0507 19:34:50.699164    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:34:50.699164    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:50.699455    6544 main.go:141] libmachine: Waiting for host to start...
	I0507 19:34:50.699455    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:34:52.727746    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:34:52.727746    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:52.728475    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:34:55.040383    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:34:55.040421    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:56.048850    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:34:58.042947    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:34:58.043313    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:34:58.043415    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:35:00.329462    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:35:00.329462    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:01.340787    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:35:03.365472    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:35:03.365522    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:03.365522    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:35:05.670728    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:35:05.670728    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:06.671647    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:35:08.721979    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:35:08.721979    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:08.722057    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:35:10.995371    6544 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:35:10.995469    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:12.008084    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:35:14.040691    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:35:14.040691    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:14.040691    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:35:16.385120    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:35:16.385120    6544 main.go:141] libmachine: [stderr =====>] : 
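The repeated `Get-VM ... .state` / `ipaddresses[0]` queries above form a simple poll loop: the guest reports no address until its network stack is up, so libmachine keeps retrying with a short sleep until an IP appears. A minimal bash sketch of that retry pattern; `query_ip` is a hypothetical stub standing in for the real PowerShell call:

```shell
calls=0
ip=""
query_ip() {
  calls=$((calls + 1))
  # stub: simulate the guest reporting no address for the first few queries
  if [ "$calls" -ge 4 ]; then
    ip="172.19.143.144"   # address appears once the guest network is up
  fi
}

while [ -z "$ip" ] && [ "$calls" -lt 20 ]; do
  query_ip
  # the real loop sleeps about a second between queries; omitted here
done
echo "got $ip after $calls queries"
```

The bounded retry count matters: without it, a VM that never obtains a DHCP lease would hang the provisioning step forever instead of failing with a timeout.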
	I0507 19:35:16.385608    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:35:18.338423    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:35:18.338423    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:18.338423    6544 machine.go:94] provisionDockerMachine start ...
	I0507 19:35:18.338423    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:35:20.301195    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:35:20.302174    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:20.302252    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:35:22.582584    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:35:22.582584    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:22.586618    6544 main.go:141] libmachine: Using SSH client type: native
	I0507 19:35:22.596859    6544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.144 22 <nil> <nil>}
	I0507 19:35:22.596859    6544 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 19:35:22.722594    6544 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0507 19:35:22.722594    6544 buildroot.go:166] provisioning hostname "multinode-600000-m02"
	I0507 19:35:22.722700    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:35:24.654430    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:35:24.654736    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:24.654736    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:35:26.918085    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:35:26.918188    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:26.921848    6544 main.go:141] libmachine: Using SSH client type: native
	I0507 19:35:26.922448    6544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.144 22 <nil> <nil>}
	I0507 19:35:26.922448    6544 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-600000-m02 && echo "multinode-600000-m02" | sudo tee /etc/hostname
	I0507 19:35:27.085346    6544 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-600000-m02
	
	I0507 19:35:27.085425    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:35:28.976680    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:35:28.977014    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:28.977098    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:35:31.184180    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:35:31.184180    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:31.188739    6544 main.go:141] libmachine: Using SSH client type: native
	I0507 19:35:31.189266    6544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.144 22 <nil> <nil>}
	I0507 19:35:31.189266    6544 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-600000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-600000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-600000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 19:35:31.323822    6544 main.go:141] libmachine: SSH cmd err, output: <nil>: 
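The /etc/hosts snippet above is idempotent: it only touches the file when the hostname is not already present, rewriting an existing 127.0.1.1 entry in place or appending one if none exists. The same logic, runnable against a scratch copy (the path is a placeholder, not the real /etc/hosts):

```shell
hosts=/tmp/hosts.sketch
printf '127.0.0.1 localhost\n127.0.1.1 oldname\n' > "$hosts"
name=multinode-600000-m02

# skip entirely if the name is already mapped
if ! grep -q "\s$name$" "$hosts"; then
  if grep -q '^127.0.1.1\s' "$hosts"; then
    # replace the existing 127.0.1.1 line
    sed -i "s/^127.0.1.1\s.*/127.0.1.1 $name/" "$hosts"
  else
    # no 127.0.1.1 line yet: append one
    echo "127.0.1.1 $name" >> "$hosts"
  fi
fi
```

Mapping the hostname to 127.0.1.1 rather than 127.0.0.1 follows the Debian convention the buildroot guest image uses, keeping `localhost` resolution untouched.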
	I0507 19:35:31.323977    6544 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0507 19:35:31.323977    6544 buildroot.go:174] setting up certificates
	I0507 19:35:31.324140    6544 provision.go:84] configureAuth start
	I0507 19:35:31.324285    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:35:33.217399    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:35:33.217399    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:33.217802    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:35:35.466651    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:35:35.466651    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:35.466651    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:35:37.366950    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:35:37.366950    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:37.367040    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:35:39.619950    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:35:39.620022    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:39.620130    6544 provision.go:143] copyHostCerts
	I0507 19:35:39.620168    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0507 19:35:39.620168    6544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0507 19:35:39.620168    6544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0507 19:35:39.620717    6544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0507 19:35:39.621176    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0507 19:35:39.621176    6544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0507 19:35:39.621176    6544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0507 19:35:39.621918    6544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0507 19:35:39.622838    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0507 19:35:39.623041    6544 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0507 19:35:39.623041    6544 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0507 19:35:39.623361    6544 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0507 19:35:39.624220    6544 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-600000-m02 san=[127.0.0.1 172.19.143.144 localhost minikube multinode-600000-m02]
	I0507 19:35:39.715389    6544 provision.go:177] copyRemoteCerts
	I0507 19:35:39.722938    6544 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 19:35:39.722938    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:35:41.587969    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:35:41.588794    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:41.588883    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:35:43.886337    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:35:43.886337    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:43.887055    6544 sshutil.go:53] new ssh client: &{IP:172.19.143.144 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\id_rsa Username:docker}
	I0507 19:35:43.984135    6544 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2608464s)
	I0507 19:35:43.984135    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0507 19:35:43.984135    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0507 19:35:44.029679    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0507 19:35:44.030157    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0507 19:35:44.076764    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0507 19:35:44.076764    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0507 19:35:44.130975    6544 provision.go:87] duration metric: took 12.8059946s to configureAuth
	I0507 19:35:44.130975    6544 buildroot.go:189] setting minikube options for container-runtime
	I0507 19:35:44.131525    6544 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:35:44.131704    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:35:46.011383    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:35:46.011383    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:46.011383    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:35:48.291130    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:35:48.291407    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:48.294665    6544 main.go:141] libmachine: Using SSH client type: native
	I0507 19:35:48.295266    6544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.144 22 <nil> <nil>}
	I0507 19:35:48.295266    6544 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 19:35:48.425975    6544 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 19:35:48.426085    6544 buildroot.go:70] root file system type: tmpfs
	I0507 19:35:48.426345    6544 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 19:35:48.426493    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:35:50.318024    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:35:50.318193    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:50.318193    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:35:52.566262    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:35:52.566262    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:52.569973    6544 main.go:141] libmachine: Using SSH client type: native
	I0507 19:35:52.570041    6544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.144 22 <nil> <nil>}
	I0507 19:35:52.570041    6544 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.143.74"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 19:35:52.715130    6544 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.143.74
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 19:35:52.715216    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:35:54.593240    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:35:54.593240    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:54.594615    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:35:56.871824    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:35:56.871824    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:35:56.876106    6544 main.go:141] libmachine: Using SSH client type: native
	I0507 19:35:56.876474    6544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.144 22 <nil> <nil>}
	I0507 19:35:56.876474    6544 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 19:35:58.944895    6544 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0507 19:35:58.944895    6544 machine.go:97] duration metric: took 40.6038087s to provisionDockerMachine
	I0507 19:35:58.944895    6544 client.go:171] duration metric: took 1m44.6631499s to LocalClient.Create
	I0507 19:35:58.944895    6544 start.go:167] duration metric: took 1m44.6631499s to libmachine.API.Create "multinode-600000"
	I0507 19:35:58.944895    6544 start.go:293] postStartSetup for "multinode-600000-m02" (driver="hyperv")
	I0507 19:35:58.944895    6544 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 19:35:58.954608    6544 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 19:35:58.954608    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:36:00.839055    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:36:00.839664    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:00.839664    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:36:03.133459    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:36:03.133459    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:03.133459    6544 sshutil.go:53] new ssh client: &{IP:172.19.143.144 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\id_rsa Username:docker}
	I0507 19:36:03.240197    6544 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2852143s)
	I0507 19:36:03.250913    6544 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 19:36:03.257730    6544 command_runner.go:130] > NAME=Buildroot
	I0507 19:36:03.257805    6544 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0507 19:36:03.257805    6544 command_runner.go:130] > ID=buildroot
	I0507 19:36:03.257805    6544 command_runner.go:130] > VERSION_ID=2023.02.9
	I0507 19:36:03.257805    6544 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0507 19:36:03.257893    6544 info.go:137] Remote host: Buildroot 2023.02.9
	I0507 19:36:03.258011    6544 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0507 19:36:03.258462    6544 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0507 19:36:03.259468    6544 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> 99922.pem in /etc/ssl/certs
	I0507 19:36:03.259601    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /etc/ssl/certs/99922.pem
	I0507 19:36:03.271027    6544 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0507 19:36:03.288285    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /etc/ssl/certs/99922.pem (1708 bytes)
	I0507 19:36:03.337336    6544 start.go:296] duration metric: took 4.392085s for postStartSetup
	I0507 19:36:03.340024    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:36:05.214422    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:36:05.214422    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:05.214422    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:36:07.458174    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:36:07.458174    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:07.458337    6544 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\config.json ...
	I0507 19:36:07.460094    6544 start.go:128] duration metric: took 1m53.1807793s to createHost
	I0507 19:36:07.460249    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:36:09.348093    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:36:09.348453    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:09.348453    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:36:11.580322    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:36:11.580322    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:11.584076    6544 main.go:141] libmachine: Using SSH client type: native
	I0507 19:36:11.584674    6544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.144 22 <nil> <nil>}
	I0507 19:36:11.584742    6544 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0507 19:36:11.714013    6544 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715110571.952419668
	
	I0507 19:36:11.714013    6544 fix.go:216] guest clock: 1715110571.952419668
	I0507 19:36:11.714013    6544 fix.go:229] Guest: 2024-05-07 19:36:11.952419668 +0000 UTC Remote: 2024-05-07 19:36:07.4601816 +0000 UTC m=+311.520352001 (delta=4.492238068s)
	I0507 19:36:11.714578    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:36:13.580845    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:36:13.580845    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:13.580845    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:36:15.844936    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:36:15.844936    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:15.848925    6544 main.go:141] libmachine: Using SSH client type: native
	I0507 19:36:15.848925    6544 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.143.144 22 <nil> <nil>}
	I0507 19:36:15.848925    6544 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715110571
	I0507 19:36:15.986870    6544 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May  7 19:36:11 UTC 2024
	
	I0507 19:36:15.986931    6544 fix.go:236] clock set: Tue May  7 19:36:11 UTC 2024
	 (err=<nil>)
	I0507 19:36:15.986931    6544 start.go:83] releasing machines lock for "multinode-600000-m02", held for 2m1.7070595s
	I0507 19:36:15.987164    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:36:17.852545    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:36:17.852545    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:17.852545    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:36:20.087753    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:36:20.087753    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:20.090890    6544 out.go:177] * Found network options:
	I0507 19:36:20.094075    6544 out.go:177]   - NO_PROXY=172.19.143.74
	W0507 19:36:20.096635    6544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0507 19:36:20.099105    6544 out.go:177]   - NO_PROXY=172.19.143.74
	W0507 19:36:20.101373    6544 proxy.go:119] fail to check proxy env: Error ip not in block
	W0507 19:36:20.102190    6544 proxy.go:119] fail to check proxy env: Error ip not in block
	I0507 19:36:20.104278    6544 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 19:36:20.104278    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:36:20.111274    6544 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0507 19:36:20.111274    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:36:22.093051    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:36:22.093051    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:22.093051    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:36:22.094043    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:36:22.094111    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:22.094186    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:36:24.397660    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:36:24.397917    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:24.398095    6544 sshutil.go:53] new ssh client: &{IP:172.19.143.144 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\id_rsa Username:docker}
	I0507 19:36:24.421173    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:36:24.421173    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:24.421990    6544 sshutil.go:53] new ssh client: &{IP:172.19.143.144 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\id_rsa Username:docker}
	I0507 19:36:24.499073    6544 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0507 19:36:24.499578    6544 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.3880189s)
	W0507 19:36:24.499688    6544 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 19:36:24.509990    6544 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0507 19:36:24.607613    6544 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0507 19:36:24.607613    6544 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0507 19:36:24.607613    6544 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5030419s)
	I0507 19:36:24.607613    6544 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0507 19:36:24.607752    6544 start.go:494] detecting cgroup driver to use...
	I0507 19:36:24.608011    6544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 19:36:24.639119    6544 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0507 19:36:24.646696    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0507 19:36:24.673808    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 19:36:24.691677    6544 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 19:36:24.699678    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 19:36:24.726968    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 19:36:24.753973    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 19:36:24.780962    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 19:36:24.807015    6544 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 19:36:24.832273    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 19:36:24.861275    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 19:36:24.887343    6544 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0507 19:36:24.912465    6544 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 19:36:24.929288    6544 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0507 19:36:24.937163    6544 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 19:36:24.966136    6544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:36:25.142850    6544 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0507 19:36:25.171769    6544 start.go:494] detecting cgroup driver to use...
	I0507 19:36:25.179641    6544 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 19:36:25.202846    6544 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0507 19:36:25.202930    6544 command_runner.go:130] > [Unit]
	I0507 19:36:25.202930    6544 command_runner.go:130] > Description=Docker Application Container Engine
	I0507 19:36:25.202930    6544 command_runner.go:130] > Documentation=https://docs.docker.com
	I0507 19:36:25.202930    6544 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0507 19:36:25.203017    6544 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0507 19:36:25.203017    6544 command_runner.go:130] > StartLimitBurst=3
	I0507 19:36:25.203017    6544 command_runner.go:130] > StartLimitIntervalSec=60
	I0507 19:36:25.203017    6544 command_runner.go:130] > [Service]
	I0507 19:36:25.203017    6544 command_runner.go:130] > Type=notify
	I0507 19:36:25.203017    6544 command_runner.go:130] > Restart=on-failure
	I0507 19:36:25.203017    6544 command_runner.go:130] > Environment=NO_PROXY=172.19.143.74
	I0507 19:36:25.203101    6544 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0507 19:36:25.203101    6544 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0507 19:36:25.203101    6544 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0507 19:36:25.203101    6544 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0507 19:36:25.203101    6544 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0507 19:36:25.203101    6544 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0507 19:36:25.203193    6544 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0507 19:36:25.203241    6544 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0507 19:36:25.203256    6544 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0507 19:36:25.203280    6544 command_runner.go:130] > ExecStart=
	I0507 19:36:25.203280    6544 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0507 19:36:25.203359    6544 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0507 19:36:25.203359    6544 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0507 19:36:25.203359    6544 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0507 19:36:25.203359    6544 command_runner.go:130] > LimitNOFILE=infinity
	I0507 19:36:25.203432    6544 command_runner.go:130] > LimitNPROC=infinity
	I0507 19:36:25.203432    6544 command_runner.go:130] > LimitCORE=infinity
	I0507 19:36:25.203432    6544 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0507 19:36:25.203432    6544 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0507 19:36:25.203498    6544 command_runner.go:130] > TasksMax=infinity
	I0507 19:36:25.203498    6544 command_runner.go:130] > TimeoutStartSec=0
	I0507 19:36:25.203521    6544 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0507 19:36:25.203521    6544 command_runner.go:130] > Delegate=yes
	I0507 19:36:25.203572    6544 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0507 19:36:25.203591    6544 command_runner.go:130] > KillMode=process
	I0507 19:36:25.203591    6544 command_runner.go:130] > [Install]
	I0507 19:36:25.203591    6544 command_runner.go:130] > WantedBy=multi-user.target
	I0507 19:36:25.212205    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 19:36:25.241596    6544 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 19:36:25.283172    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 19:36:25.313456    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 19:36:25.348049    6544 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0507 19:36:25.403327    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 19:36:25.424707    6544 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 19:36:25.456399    6544 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0507 19:36:25.465745    6544 ssh_runner.go:195] Run: which cri-dockerd
	I0507 19:36:25.472317    6544 command_runner.go:130] > /usr/bin/cri-dockerd
	I0507 19:36:25.480368    6544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 19:36:25.497090    6544 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 19:36:25.537962    6544 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 19:36:25.736306    6544 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 19:36:25.906316    6544 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 19:36:25.906466    6544 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0507 19:36:25.943571    6544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:36:26.137906    6544 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 19:36:28.605628    6544 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.4675615s)
	I0507 19:36:28.615078    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 19:36:28.649711    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 19:36:28.679851    6544 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 19:36:28.867691    6544 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 19:36:29.042573    6544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:36:29.213105    6544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 19:36:29.248239    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 19:36:29.278457    6544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:36:29.455561    6544 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 19:36:29.551240    6544 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 19:36:29.560308    6544 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 19:36:29.569971    6544 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0507 19:36:29.569971    6544 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0507 19:36:29.569971    6544 command_runner.go:130] > Device: 0,22	Inode: 885         Links: 1
	I0507 19:36:29.569971    6544 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0507 19:36:29.569971    6544 command_runner.go:130] > Access: 2024-05-07 19:36:29.717203959 +0000
	I0507 19:36:29.569971    6544 command_runner.go:130] > Modify: 2024-05-07 19:36:29.717203959 +0000
	I0507 19:36:29.569971    6544 command_runner.go:130] > Change: 2024-05-07 19:36:29.720204163 +0000
	I0507 19:36:29.569971    6544 command_runner.go:130] >  Birth: -
	I0507 19:36:29.569971    6544 start.go:562] Will wait 60s for crictl version
	I0507 19:36:29.578432    6544 ssh_runner.go:195] Run: which crictl
	I0507 19:36:29.584183    6544 command_runner.go:130] > /usr/bin/crictl
	I0507 19:36:29.592256    6544 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 19:36:29.638268    6544 command_runner.go:130] > Version:  0.1.0
	I0507 19:36:29.638268    6544 command_runner.go:130] > RuntimeName:  docker
	I0507 19:36:29.638268    6544 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0507 19:36:29.638268    6544 command_runner.go:130] > RuntimeApiVersion:  v1
	I0507 19:36:29.640189    6544 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0507 19:36:29.647276    6544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 19:36:29.675920    6544 command_runner.go:130] > 26.0.2
	I0507 19:36:29.684282    6544 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 19:36:29.712336    6544 command_runner.go:130] > 26.0.2
	I0507 19:36:29.716190    6544 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0507 19:36:29.718853    6544 out.go:177]   - env NO_PROXY=172.19.143.74
	I0507 19:36:29.721098    6544 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0507 19:36:29.724493    6544 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0507 19:36:29.724493    6544 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0507 19:36:29.724493    6544 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0507 19:36:29.724493    6544 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a3:a5:4f Flags:up|broadcast|multicast|running}
	I0507 19:36:29.727536    6544 ip.go:210] interface addr: fe80::1edb:f5fd:c218:d8d2/64
	I0507 19:36:29.727536    6544 ip.go:210] interface addr: 172.19.128.1/20
	I0507 19:36:29.735983    6544 ssh_runner.go:195] Run: grep 172.19.128.1	host.minikube.internal$ /etc/hosts
	I0507 19:36:29.740859    6544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 19:36:29.762690    6544 mustload.go:65] Loading cluster: multinode-600000
	I0507 19:36:29.763347    6544 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:36:29.763850    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:36:31.621335    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:36:31.621335    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:31.621335    6544 host.go:66] Checking if "multinode-600000" exists ...
	I0507 19:36:31.622115    6544 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000 for IP: 172.19.143.144
	I0507 19:36:31.622115    6544 certs.go:194] generating shared ca certs ...
	I0507 19:36:31.622115    6544 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:36:31.622785    6544 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0507 19:36:31.623223    6544 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0507 19:36:31.623448    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0507 19:36:31.623583    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0507 19:36:31.623583    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0507 19:36:31.623583    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0507 19:36:31.624641    6544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem (1338 bytes)
	W0507 19:36:31.624784    6544 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992_empty.pem, impossibly tiny 0 bytes
	I0507 19:36:31.624784    6544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0507 19:36:31.624784    6544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0507 19:36:31.624784    6544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0507 19:36:31.625470    6544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0507 19:36:31.625788    6544 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem (1708 bytes)
	I0507 19:36:31.625962    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /usr/share/ca-certificates/99922.pem
	I0507 19:36:31.626103    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:36:31.626151    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem -> /usr/share/ca-certificates/9992.pem
	I0507 19:36:31.626151    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 19:36:31.677356    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 19:36:31.722930    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 19:36:31.769772    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0507 19:36:31.810077    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /usr/share/ca-certificates/99922.pem (1708 bytes)
	I0507 19:36:31.850936    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 19:36:31.891049    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem --> /usr/share/ca-certificates/9992.pem (1338 bytes)
	I0507 19:36:31.942738    6544 ssh_runner.go:195] Run: openssl version
	I0507 19:36:31.949845    6544 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0507 19:36:31.962758    6544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99922.pem && ln -fs /usr/share/ca-certificates/99922.pem /etc/ssl/certs/99922.pem"
	I0507 19:36:31.991555    6544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99922.pem
	I0507 19:36:31.998132    6544 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 19:36:31.998132    6544 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 19:36:32.006273    6544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99922.pem
	I0507 19:36:32.014914    6544 command_runner.go:130] > 3ec20f2e
	I0507 19:36:32.023431    6544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99922.pem /etc/ssl/certs/3ec20f2e.0"
	I0507 19:36:32.055382    6544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 19:36:32.081700    6544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:36:32.088565    6544 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:36:32.089065    6544 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:36:32.097298    6544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:36:32.105154    6544 command_runner.go:130] > b5213941
	I0507 19:36:32.113244    6544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0507 19:36:32.140378    6544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9992.pem && ln -fs /usr/share/ca-certificates/9992.pem /etc/ssl/certs/9992.pem"
	I0507 19:36:32.168032    6544 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9992.pem
	I0507 19:36:32.173767    6544 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 19:36:32.174819    6544 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 19:36:32.183103    6544 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9992.pem
	I0507 19:36:32.191826    6544 command_runner.go:130] > 51391683
	I0507 19:36:32.200466    6544 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9992.pem /etc/ssl/certs/51391683.0"
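The three `openssl x509 -hash` / `ln -fs` pairs above implement OpenSSL's hashed-CA-directory lookup: each CA certificate in `/usr/share/ca-certificates` gets a symlink named `<subject-hash>.0` under `/etc/ssl/certs`, which is how OpenSSL locates a CA by hash at verification time. A minimal, self-contained sketch of the same scheme, using a throwaway self-signed certificate in a temp directory (not minikube's actual paths):

```shell
set -euo pipefail
dir=$(mktemp -d)
# Throwaway self-signed CA standing in for minikubeCA.pem
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=demoCA" -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
# Same scheme as the log: hash the subject, then link <hash>.0 -> cert
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
# OpenSSL can now find the CA by hash lookup in that directory
openssl verify -CApath "$dir" "$dir/ca.pem"
```

The `.0` suffix is a collision counter: a second CA with the same subject hash would be linked as `<hash>.1`, which is why the log tests `-L /etc/ssl/certs/<hash>.0` before creating the link.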
	I0507 19:36:32.226826    6544 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 19:36:32.233395    6544 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0507 19:36:32.233395    6544 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0507 19:36:32.233395    6544 kubeadm.go:928] updating node {m02 172.19.143.144 8443 v1.30.0 docker false true} ...
	I0507 19:36:32.233932    6544 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-600000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.143.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0507 19:36:32.241836    6544 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0507 19:36:32.257980    6544 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	I0507 19:36:32.257980    6544 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.30.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.30.0': No such file or directory
	
	Initiating transfer...
	I0507 19:36:32.265932    6544 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.30.0
	I0507 19:36:32.283401    6544 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubectl.sha256
	I0507 19:36:32.283401    6544 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubelet.sha256
	I0507 19:36:32.283401    6544 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/linux/amd64/kubeadm.sha256
	I0507 19:36:32.284052    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl -> /var/lib/minikube/binaries/v1.30.0/kubectl
	I0507 19:36:32.284052    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm -> /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0507 19:36:32.294273    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 19:36:32.294989    6544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl
	I0507 19:36:32.296199    6544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm
	I0507 19:36:32.318773    6544 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet -> /var/lib/minikube/binaries/v1.30.0/kubelet
	I0507 19:36:32.318773    6544 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0507 19:36:32.318773    6544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubectl': No such file or directory
	I0507 19:36:32.318773    6544 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0507 19:36:32.318773    6544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubeadm': No such file or directory
	I0507 19:36:32.318773    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubectl --> /var/lib/minikube/binaries/v1.30.0/kubectl (51454104 bytes)
	I0507 19:36:32.318773    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubeadm --> /var/lib/minikube/binaries/v1.30.0/kubeadm (50249880 bytes)
	I0507 19:36:32.330478    6544 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet
	I0507 19:36:32.392335    6544 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0507 19:36:32.397174    6544 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.30.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.30.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.30.0/kubelet': No such file or directory
	I0507 19:36:32.397174    6544 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\linux\amd64\v1.30.0/kubelet --> /var/lib/minikube/binaries/v1.30.0/kubelet (100100024 bytes)
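The `checksum=file:...sha256` suffix on the download URLs above means each binary is verified against the `.sha256` sidecar published alongside it on dl.k8s.io before being copied into the VM. A sketch of that verification step with a local stand-in file (the `kubelet` file and its contents here are placeholders, not a real download):

```shell
set -euo pipefail
dir=$(mktemp -d)
# Stand-in for a downloaded binary and its published .sha256 sidecar
printf 'fake-kubelet-bytes' > "$dir/kubelet"
sha256sum "$dir/kubelet" | awk '{print $1}' > "$dir/kubelet.sha256"
# Verification step implied by "checksum=file:...kubelet.sha256"
want=$(cat "$dir/kubelet.sha256")
got=$(sha256sum "$dir/kubelet" | awk '{print $1}')
if [ "$want" = "$got" ]; then
  echo "checksum OK"
else
  echo "checksum MISMATCH" >&2; exit 1
fi
```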
	I0507 19:36:33.566589    6544 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0507 19:36:33.584388    6544 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0507 19:36:33.611335    6544 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 19:36:33.649392    6544 ssh_runner.go:195] Run: grep 172.19.143.74	control-plane.minikube.internal$ /etc/hosts
	I0507 19:36:33.654903    6544 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.143.74	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
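The `/etc/hosts` rewrite above is idempotent: `grep -v` drops any stale `control-plane.minikube.internal` entry, the new IP is appended, and the result is copied back via a temp file, so repeated runs leave exactly one entry. The same pattern against a scratch file:

```shell
set -euo pipefail
hosts=$(mktemp)
# Seed with a stale control-plane entry plus an unrelated line
printf '127.0.0.1 localhost\n1.2.3.4\tcontrol-plane.minikube.internal\n' > "$hosts"
ip=172.19.143.74
# Same idempotent rewrite as the log: drop any old entry, append the current IP
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '%s\tcontrol-plane.minikube.internal\n' "$ip"; } > "$hosts.new"
mv "$hosts.new" "$hosts"
```

After the rewrite the file contains the `localhost` line untouched and a single `172.19.143.74` entry, regardless of how many times the snippet runs.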
	I0507 19:36:33.682579    6544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:36:33.863760    6544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 19:36:33.894257    6544 host.go:66] Checking if "multinode-600000" exists ...
	I0507 19:36:33.895297    6544 start.go:316] joinCluster: &{Name:multinode-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.143.74 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.143.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 19:36:33.895297    6544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0507 19:36:33.895297    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:36:35.800860    6544 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:36:35.800860    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:35.800860    6544 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:36:38.069785    6544 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:36:38.070779    6544 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:36:38.071352    6544 sshutil.go:53] new ssh client: &{IP:172.19.143.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\id_rsa Username:docker}
	I0507 19:36:38.257237    6544 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token pfycy5.c873ib6vafmwkrts --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 
	I0507 19:36:38.260667    6544 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.3650863s)
	I0507 19:36:38.260726    6544 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.143.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0507 19:36:38.260726    6544 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pfycy5.c873ib6vafmwkrts --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-600000-m02"
	I0507 19:36:38.437772    6544 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0507 19:36:39.733079    6544 command_runner.go:130] > [preflight] Running pre-flight checks
	I0507 19:36:39.733079    6544 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0507 19:36:39.733079    6544 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0507 19:36:39.733079    6544 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0507 19:36:39.733980    6544 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0507 19:36:39.733980    6544 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0507 19:36:39.733980    6544 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0507 19:36:39.733980    6544 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001618687s
	I0507 19:36:39.734049    6544 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0507 19:36:39.734049    6544 command_runner.go:130] > This node has joined the cluster:
	I0507 19:36:39.734049    6544 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0507 19:36:39.734049    6544 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0507 19:36:39.734049    6544 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0507 19:36:39.734099    6544 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token pfycy5.c873ib6vafmwkrts --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-600000-m02": (1.4732773s)
	I0507 19:36:39.734099    6544 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0507 19:36:40.125861    6544 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0507 19:36:40.133185    6544 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-600000-m02 minikube.k8s.io/updated_at=2024_05_07T19_36_40_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f minikube.k8s.io/name=multinode-600000 minikube.k8s.io/primary=false
	I0507 19:36:40.244636    6544 command_runner.go:130] > node/multinode-600000-m02 labeled
	I0507 19:36:40.244701    6544 start.go:318] duration metric: took 6.3489913s to joinCluster
	I0507 19:36:40.244901    6544 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.143.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0507 19:36:40.248505    6544 out.go:177] * Verifying Kubernetes components...
	I0507 19:36:40.245276    6544 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:36:40.259892    6544 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:36:40.453340    6544 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 19:36:40.476279    6544 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 19:36:40.476279    6544 kapi.go:59] client config for multinode-600000: &rest.Config{Host:"https://172.19.143.74:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 19:36:40.477278    6544 node_ready.go:35] waiting up to 6m0s for node "multinode-600000-m02" to be "Ready" ...
	I0507 19:36:40.477278    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:40.477278    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:40.477278    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:40.477278    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:40.489203    6544 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0507 19:36:40.489203    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:40.489638    6544 round_trippers.go:580]     Content-Length: 3921
	I0507 19:36:40.489638    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:40 GMT
	I0507 19:36:40.489638    6544 round_trippers.go:580]     Audit-Id: 4756a52c-937b-45ef-bf3b-b288b6703f91
	I0507 19:36:40.489638    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:40.489638    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:40.489638    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:40.489638    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:40.489638    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"610","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
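The GET requests that follow are a poll loop: `node_ready` re-fetches the node object roughly every 500ms until its `Ready` condition turns true or the 6m0s budget expires. The generic deadline-polling pattern, sketched with a stand-in condition (a file appearing) instead of an API call:

```shell
set -euo pipefail
# Generic poll-until-ready loop; the marker file stands in for the Ready condition
marker=$(mktemp -u)
( sleep 1; touch "$marker" ) &      # condition becomes true after ~1s
deadline=$(( $(date +%s) + 10 ))    # 10s budget stands in for the log's 6m0s
status=timeout
while [ "$(date +%s)" -lt "$deadline" ]; do
  if [ -e "$marker" ]; then status=ready; break; fi
  sleep 0.5                         # the log polls roughly every 500ms
done
wait
echo "$status"
```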
	I0507 19:36:40.991630    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:40.991630    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:40.991843    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:40.991843    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:40.994445    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:36:40.994445    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:40.994445    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:41 GMT
	I0507 19:36:40.994740    6544 round_trippers.go:580]     Audit-Id: 6c15e20e-4ec7-4eb4-ab70-2e5484388644
	I0507 19:36:40.994740    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:40.994740    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:40.994740    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:40.994740    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:40.994740    6544 round_trippers.go:580]     Content-Length: 3921
	I0507 19:36:40.994841    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"610","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0507 19:36:41.490406    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:41.490471    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:41.490471    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:41.490537    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:41.494167    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:41.494167    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:41.494167    6544 round_trippers.go:580]     Content-Length: 3921
	I0507 19:36:41.494167    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:41 GMT
	I0507 19:36:41.494167    6544 round_trippers.go:580]     Audit-Id: d057ace3-2cf8-43e5-be3e-662e68506df6
	I0507 19:36:41.494167    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:41.494167    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:41.494167    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:41.494167    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:41.494628    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"610","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0507 19:36:41.990274    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:41.990274    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:41.990380    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:41.990380    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:41.993629    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:41.993629    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:41.993629    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:41.993629    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:41.993629    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:41.993629    6544 round_trippers.go:580]     Content-Length: 3921
	I0507 19:36:41.993629    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:42 GMT
	I0507 19:36:41.993629    6544 round_trippers.go:580]     Audit-Id: 97c38b21-05ca-47e6-9680-d759f9afd20d
	I0507 19:36:41.993629    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:41.994595    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"610","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0507 19:36:42.492374    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:42.492456    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:42.492456    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:42.492456    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:42.495756    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:42.495756    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:42.495756    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:42.495756    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:42.495756    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:42.495756    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:42.495756    6544 round_trippers.go:580]     Content-Length: 3921
	I0507 19:36:42.495756    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:42 GMT
	I0507 19:36:42.495756    6544 round_trippers.go:580]     Audit-Id: ba83ad21-08e7-4420-8029-a7ae238744d3
	I0507 19:36:42.495756    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"610","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}," [truncated 2897 chars]
	I0507 19:36:42.496794    6544 node_ready.go:53] node "multinode-600000-m02" has status "Ready":"False"
	I0507 19:36:42.991359    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:42.991453    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:42.991453    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:42.991518    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:42.999777    6544 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 19:36:42.999777    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:42.999777    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:42.999777    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:42.999777    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:42.999777    6544 round_trippers.go:580]     Content-Length: 4030
	I0507 19:36:42.999777    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:43 GMT
	I0507 19:36:42.999777    6544 round_trippers.go:580]     Audit-Id: 80a28ffc-53bb-4714-8ab2-f85e23aacd45
	I0507 19:36:42.999777    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:42.999777    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"616","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0507 19:36:43.490463    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:43.490540    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:43.490540    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:43.490540    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:43.497377    6544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:36:43.497377    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:43.497377    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:43 GMT
	I0507 19:36:43.497377    6544 round_trippers.go:580]     Audit-Id: 56726f0d-5c8d-4e41-8c69-1b6af526acad
	I0507 19:36:43.497377    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:43.497377    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:43.497377    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:43.497377    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:43.497377    6544 round_trippers.go:580]     Content-Length: 4030
	I0507 19:36:43.497377    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"616","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0507 19:36:43.991945    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:43.992024    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:43.992024    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:43.992024    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:43.995610    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:43.995610    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:43.995610    6544 round_trippers.go:580]     Audit-Id: 68642508-2d8a-4cde-b9db-b336d4b9c36f
	I0507 19:36:43.996185    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:43.996185    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:43.996185    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:43.996185    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:43.996185    6544 round_trippers.go:580]     Content-Length: 4030
	I0507 19:36:43.996185    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:44 GMT
	I0507 19:36:43.996257    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"616","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0507 19:36:44.478799    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:44.478799    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:44.478799    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:44.478799    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:44.482973    6544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:36:44.483060    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:44.483060    6544 round_trippers.go:580]     Audit-Id: d2aae946-375f-48dd-97d8-56d0fda35769
	I0507 19:36:44.483060    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:44.483060    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:44.483060    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:44.483060    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:44.483060    6544 round_trippers.go:580]     Content-Length: 4030
	I0507 19:36:44.483060    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:44 GMT
	I0507 19:36:44.483211    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"616","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0507 19:36:44.979869    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:44.979869    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:44.979869    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:44.979869    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:44.984042    6544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:36:44.984042    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:44.984042    6544 round_trippers.go:580]     Audit-Id: 39cf731e-b0dd-433f-892d-c4c28c94def1
	I0507 19:36:44.984444    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:44.984444    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:44.984444    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:44.984444    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:44.984513    6544 round_trippers.go:580]     Content-Length: 4030
	I0507 19:36:44.984513    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:45 GMT
	I0507 19:36:44.984763    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"616","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0507 19:36:44.985308    6544 node_ready.go:53] node "multinode-600000-m02" has status "Ready":"False"
	I0507 19:36:45.485992    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:45.486067    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:45.486067    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:45.486067    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:45.488649    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:36:45.489669    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:45.489669    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:45.489669    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:45.489669    6544 round_trippers.go:580]     Content-Length: 4030
	I0507 19:36:45.489669    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:45 GMT
	I0507 19:36:45.489669    6544 round_trippers.go:580]     Audit-Id: 3c52eed7-5bf4-4344-a90e-93e69c68bf21
	I0507 19:36:45.489669    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:45.489669    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:45.489782    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"616","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0507 19:36:45.986383    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:45.986383    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:45.986383    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:45.986383    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:45.989955    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:45.989955    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:45.989955    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:45.989955    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:45.989955    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:45.990696    6544 round_trippers.go:580]     Content-Length: 4030
	I0507 19:36:45.990696    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:46 GMT
	I0507 19:36:45.990696    6544 round_trippers.go:580]     Audit-Id: f6c4fa45-d7d0-443e-88e3-959ed2acdddf
	I0507 19:36:45.990696    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:45.990696    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"616","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0507 19:36:46.479145    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:46.479145    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:46.479207    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:46.479207    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:46.482444    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:46.482444    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:46.482444    6544 round_trippers.go:580]     Audit-Id: cd38d35b-b2ed-4b3e-9f33-f37476cadc8d
	I0507 19:36:46.482444    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:46.482444    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:46.482444    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:46.482444    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:46.482444    6544 round_trippers.go:580]     Content-Length: 4030
	I0507 19:36:46.482444    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:46 GMT
	I0507 19:36:46.483481    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"616","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0507 19:36:46.986788    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:46.986862    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:46.986862    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:46.986862    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:46.990145    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:46.990588    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:46.990588    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:46.990689    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:46.990689    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:46.990689    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:46.990689    6544 round_trippers.go:580]     Content-Length: 4030
	I0507 19:36:46.990689    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:47 GMT
	I0507 19:36:46.990689    6544 round_trippers.go:580]     Audit-Id: 1cbfcddf-6bfa-4d36-8812-2a51dc59ce4a
	I0507 19:36:46.990689    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"616","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0507 19:36:46.991216    6544 node_ready.go:53] node "multinode-600000-m02" has status "Ready":"False"
	I0507 19:36:47.480340    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:47.480340    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:47.480427    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:47.480427    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:47.484019    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:36:47.484019    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:47.484019    6544 round_trippers.go:580]     Content-Length: 4030
	I0507 19:36:47.484019    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:47 GMT
	I0507 19:36:47.484019    6544 round_trippers.go:580]     Audit-Id: e28161b7-8244-4353-a984-da25ff341720
	I0507 19:36:47.484019    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:47.484019    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:47.484019    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:47.484019    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:47.484191    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"616","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0507 19:36:47.990304    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:47.990304    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:47.990304    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:47.990304    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:47.993895    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:47.993895    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:47.993895    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:47.993895    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:47.993895    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:47.993895    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:47.993895    6544 round_trippers.go:580]     Content-Length: 4030
	I0507 19:36:47.993895    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:48 GMT
	I0507 19:36:47.993895    6544 round_trippers.go:580]     Audit-Id: 02a5b99e-1173-4c45-ba91-1f20a4581882
	I0507 19:36:47.993895    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"616","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0507 19:36:48.483722    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:48.483722    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:48.483722    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:48.483722    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:48.486329    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:36:48.486329    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:48.486329    6544 round_trippers.go:580]     Audit-Id: 0e200346-e262-4a88-84d5-1ae49010c605
	I0507 19:36:48.486329    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:48.486329    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:48.487325    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:48.487375    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:48.487375    6544 round_trippers.go:580]     Content-Length: 4030
	I0507 19:36:48.487375    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:48 GMT
	I0507 19:36:48.487539    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"616","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0507 19:36:48.978330    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:48.978398    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:48.978398    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:48.978398    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:48.981125    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:36:48.982091    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:48.982091    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:49 GMT
	I0507 19:36:48.982091    6544 round_trippers.go:580]     Audit-Id: f794ea9a-d09c-44fe-82e2-19cc178a33d7
	I0507 19:36:48.982091    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:48.982177    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:48.982177    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:48.982177    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:48.982177    6544 round_trippers.go:580]     Content-Length: 4030
	I0507 19:36:48.982284    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"616","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0507 19:36:49.484341    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:49.484416    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:49.484416    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:49.484461    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:49.491158    6544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:36:49.491158    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:49.491158    6544 round_trippers.go:580]     Content-Length: 4030
	I0507 19:36:49.491158    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:49 GMT
	I0507 19:36:49.491158    6544 round_trippers.go:580]     Audit-Id: 874bd907-269a-4869-80a6-dae2b979b61f
	I0507 19:36:49.491158    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:49.491158    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:49.491158    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:49.491158    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:49.491158    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"616","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3006 chars]
	I0507 19:36:49.491158    6544 node_ready.go:53] node "multinode-600000-m02" has status "Ready":"False"
	I0507 19:36:49.984311    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:49.984311    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:49.984311    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:49.984311    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:50.180286    6544 round_trippers.go:574] Response Status: 200 OK in 195 milliseconds
	I0507 19:36:50.180286    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:50.180286    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:50.180286    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:50.180286    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:50.180286    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:50.180286    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:50 GMT
	I0507 19:36:50.180286    6544 round_trippers.go:580]     Audit-Id: 1e353e1e-f3cb-419b-a476-2983f6ccc1b6
	I0507 19:36:50.180720    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:50.486231    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:50.486299    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:50.486299    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:50.486299    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:50.489728    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:50.489728    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:50.490110    6544 round_trippers.go:580]     Audit-Id: d7246601-2cab-4de9-8a3d-f254a525402a
	I0507 19:36:50.490110    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:50.490110    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:50.490110    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:50.490110    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:50.490110    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:50 GMT
	I0507 19:36:50.490397    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:50.993821    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:50.994020    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:50.994020    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:50.994020    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:50.997810    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:50.997810    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:50.997810    6544 round_trippers.go:580]     Audit-Id: b84b9977-a0be-448e-9358-dd2c465d9c6f
	I0507 19:36:50.997810    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:50.997810    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:50.997810    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:50.997810    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:50.997810    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:51 GMT
	I0507 19:36:50.997810    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:51.487377    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:51.487377    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:51.487377    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:51.487377    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:51.491230    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:51.491230    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:51.491230    6544 round_trippers.go:580]     Audit-Id: 1857866a-f398-459a-89bf-8022b7778b53
	I0507 19:36:51.491230    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:51.491230    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:51.491230    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:51.491230    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:51.491230    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:51 GMT
	I0507 19:36:51.491430    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:51.980560    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:51.980560    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:51.980560    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:51.980560    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:51.987360    6544 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:36:51.987423    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:51.987423    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:51.987423    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:51.987423    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:51.987423    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:52 GMT
	I0507 19:36:51.987423    6544 round_trippers.go:580]     Audit-Id: 11df1b54-c173-48ff-bafd-df6aeb13c73b
	I0507 19:36:51.987423    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:51.987423    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:51.988081    6544 node_ready.go:53] node "multinode-600000-m02" has status "Ready":"False"
	I0507 19:36:52.487519    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:52.487519    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:52.487519    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:52.487519    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:52.491220    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:52.491220    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:52.491220    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:52.491220    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:52 GMT
	I0507 19:36:52.491220    6544 round_trippers.go:580]     Audit-Id: e0234b01-b583-42f8-aa65-1703c6d23421
	I0507 19:36:52.491220    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:52.491220    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:52.491220    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:52.491874    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:52.994496    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:52.994496    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:52.994496    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:52.994496    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:52.999476    6544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:36:52.999476    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:52.999476    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:52.999476    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:52.999476    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:52.999476    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:52.999476    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:53 GMT
	I0507 19:36:52.999476    6544 round_trippers.go:580]     Audit-Id: ccfbef63-2646-4965-87ef-a4e24d8615d3
	I0507 19:36:53.000480    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:53.484384    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:53.484452    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:53.484516    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:53.484516    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:53.487671    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:53.487671    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:53.487671    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:53.487671    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:53.487671    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:53.487671    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:53.487671    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:53 GMT
	I0507 19:36:53.487671    6544 round_trippers.go:580]     Audit-Id: 259e8c01-2d1b-4665-a33e-7e054759897f
	I0507 19:36:53.487671    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:53.991405    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:53.991405    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:53.991405    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:53.991405    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:53.994751    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:53.994751    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:53.994751    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:53.994751    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:53.994751    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:54 GMT
	I0507 19:36:53.994751    6544 round_trippers.go:580]     Audit-Id: ff381ce9-6337-41eb-b1dc-a5cbe2e6ecb1
	I0507 19:36:53.994751    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:53.994936    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:53.995294    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:53.995294    6544 node_ready.go:53] node "multinode-600000-m02" has status "Ready":"False"
	I0507 19:36:54.492996    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:54.493077    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:54.493156    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:54.493156    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:54.496897    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:54.496897    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:54.496897    6544 round_trippers.go:580]     Audit-Id: 4e0c39e7-2028-4cf8-b722-7b1110749e28
	I0507 19:36:54.496897    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:54.496897    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:54.496897    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:54.496897    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:54.496897    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:54 GMT
	I0507 19:36:54.497381    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:54.980241    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:54.980241    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:54.980241    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:54.980241    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:54.983140    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:36:54.983713    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:54.983713    6544 round_trippers.go:580]     Audit-Id: acd59a7b-110b-415d-b757-7f4a9d4bc2f9
	I0507 19:36:54.983764    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:54.983764    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:54.983764    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:54.983764    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:54.983764    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:55 GMT
	I0507 19:36:54.983764    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:55.485296    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:55.485380    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:55.485380    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:55.485464    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:55.490011    6544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:36:55.490367    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:55.490367    6544 round_trippers.go:580]     Audit-Id: 3b447390-27d3-48f5-ba24-88d2efe70e6b
	I0507 19:36:55.490367    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:55.490367    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:55.490367    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:55.490367    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:55.490367    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:55 GMT
	I0507 19:36:55.490666    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:55.984599    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:55.984599    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:55.984896    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:55.984896    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:55.988208    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:36:55.988269    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:55.988269    6544 round_trippers.go:580]     Audit-Id: 9902cdc2-a13e-4308-bd9b-ceb119e57f0f
	I0507 19:36:55.988269    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:55.988269    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:55.988269    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:55.988269    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:55.988269    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:56 GMT
	I0507 19:36:55.988269    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:56.485432    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:56.485432    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:56.485432    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:56.485539    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:56.488901    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:56.488901    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:56.488901    6544 round_trippers.go:580]     Audit-Id: da1951c3-d5a0-48b2-88be-676d0c1b6307
	I0507 19:36:56.488901    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:56.488901    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:56.488901    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:56.488901    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:56.488901    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:56 GMT
	I0507 19:36:56.489505    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:56.490226    6544 node_ready.go:53] node "multinode-600000-m02" has status "Ready":"False"
	I0507 19:36:56.982652    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:56.982742    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:56.982828    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:56.982828    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:56.986155    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:56.986155    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:56.986403    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:56.986403    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:56.986403    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:56.986403    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:57 GMT
	I0507 19:36:56.986403    6544 round_trippers.go:580]     Audit-Id: 7d35f06b-da08-4c31-b27a-debf7c8e3e50
	I0507 19:36:56.986403    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:56.986483    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:57.481192    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:57.481192    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:57.481192    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:57.481192    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:57.485162    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:57.485162    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:57.485162    6544 round_trippers.go:580]     Audit-Id: c0f26a47-762d-458e-a0c8-e176d11922ac
	I0507 19:36:57.485162    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:57.485162    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:57.485162    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:57.485162    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:57.485162    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:57 GMT
	I0507 19:36:57.485605    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:57.981601    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:57.981681    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:57.981681    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:57.981681    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:57.986985    6544 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:36:57.986985    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:57.986985    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:58 GMT
	I0507 19:36:57.986985    6544 round_trippers.go:580]     Audit-Id: 9abf65bd-9690-470c-b7f3-d0ea1942397f
	I0507 19:36:57.986985    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:57.986985    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:57.986985    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:57.986985    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:57.987600    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:58.484146    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:58.484396    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:58.484396    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:58.484396    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:58.497042    6544 round_trippers.go:574] Response Status: 200 OK in 12 milliseconds
	I0507 19:36:58.497042    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:58.497042    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:58.497042    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:58.497042    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:58 GMT
	I0507 19:36:58.497042    6544 round_trippers.go:580]     Audit-Id: 46a7557e-8164-407c-a618-622f62acc36f
	I0507 19:36:58.497042    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:58.497042    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:58.497385    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:58.497385    6544 node_ready.go:53] node "multinode-600000-m02" has status "Ready":"False"
	I0507 19:36:58.981593    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:58.981721    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:58.981721    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:58.981721    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:58.984385    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:36:58.984385    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:58.985268    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:58.985268    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:58.985268    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:58.985268    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:59 GMT
	I0507 19:36:58.985268    6544 round_trippers.go:580]     Audit-Id: ed70aac1-f88f-407a-bb04-db5bdd47bc79
	I0507 19:36:58.985268    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:58.985544    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:59.482844    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:59.482844    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:59.482844    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:59.482844    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:59.486674    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:59.486674    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:59.486674    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:36:59 GMT
	I0507 19:36:59.486674    6544 round_trippers.go:580]     Audit-Id: 0307ba9f-5554-478d-becf-c83ec6780b93
	I0507 19:36:59.486674    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:59.487342    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:59.487378    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:59.487378    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:59.487658    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"627","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3398 chars]
	I0507 19:36:59.987741    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:36:59.987822    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:59.987822    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:59.987822    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:59.991162    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:36:59.991162    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:59.991162    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:59.991162    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:59.991428    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:00 GMT
	I0507 19:36:59.991428    6544 round_trippers.go:580]     Audit-Id: 0190dc55-1aa6-48cc-92bd-c8daa9d6b99a
	I0507 19:36:59.991428    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:59.991428    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:59.991640    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"649","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0507 19:36:59.992289    6544 node_ready.go:49] node "multinode-600000-m02" has status "Ready":"True"
	I0507 19:36:59.992368    6544 node_ready.go:38] duration metric: took 19.5138235s for node "multinode-600000-m02" to be "Ready" ...
	I0507 19:36:59.992368    6544 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 19:36:59.992542    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods
	I0507 19:36:59.992542    6544 round_trippers.go:469] Request Headers:
	I0507 19:36:59.992542    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:36:59.992542    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:36:59.997232    6544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:36:59.997638    6544 round_trippers.go:577] Response Headers:
	I0507 19:36:59.997638    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:36:59.997638    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:36:59.997638    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:00 GMT
	I0507 19:36:59.997638    6544 round_trippers.go:580]     Audit-Id: f3a74390-cdae-440a-afda-bd16987cd198
	I0507 19:36:59.997776    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:36:59.997776    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:36:59.999640    6544 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"649"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"459","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 70438 chars]
	I0507 19:37:00.002469    6544 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace to be "Ready" ...
	I0507 19:37:00.002644    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:37:00.002644    6544 round_trippers.go:469] Request Headers:
	I0507 19:37:00.002644    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:37:00.002708    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:37:00.004915    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:37:00.004915    6544 round_trippers.go:577] Response Headers:
	I0507 19:37:00.004915    6544 round_trippers.go:580]     Audit-Id: ed6a577a-4d1e-4712-9565-c440c19aee79
	I0507 19:37:00.004915    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:37:00.004915    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:37:00.004915    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:37:00.004915    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:37:00.004915    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:00 GMT
	I0507 19:37:00.005874    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"459","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6578 chars]
	I0507 19:37:00.006406    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:37:00.006465    6544 round_trippers.go:469] Request Headers:
	I0507 19:37:00.006465    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:37:00.006465    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:37:00.008591    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:37:00.008591    6544 round_trippers.go:577] Response Headers:
	I0507 19:37:00.008591    6544 round_trippers.go:580]     Audit-Id: 22c8993f-8eae-4955-8cd3-0b4adfc865f3
	I0507 19:37:00.008591    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:37:00.008591    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:37:00.008591    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:37:00.008591    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:37:00.008591    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:00 GMT
	I0507 19:37:00.008591    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"466","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0507 19:37:00.008591    6544 pod_ready.go:92] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"True"
	I0507 19:37:00.008591    6544 pod_ready.go:81] duration metric: took 6.1213ms for pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace to be "Ready" ...
	I0507 19:37:00.008591    6544 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:37:00.008591    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-600000
	I0507 19:37:00.008591    6544 round_trippers.go:469] Request Headers:
	I0507 19:37:00.008591    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:37:00.008591    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:37:00.013299    6544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:37:00.013299    6544 round_trippers.go:577] Response Headers:
	I0507 19:37:00.013299    6544 round_trippers.go:580]     Audit-Id: 53de24ef-c98e-4487-b26e-0d7dd32182ff
	I0507 19:37:00.013299    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:37:00.013299    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:37:00.013299    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:37:00.013299    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:37:00.013299    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:00 GMT
	I0507 19:37:00.013679    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-600000","namespace":"kube-system","uid":"d55601ee-11f4-432c-8170-ecc4d8212782","resourceVersion":"421","creationTimestamp":"2024-05-07T19:33:44Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.143.74:2379","kubernetes.io/config.hash":"d902475f151631231b80fe38edab39e8","kubernetes.io/config.mirror":"d902475f151631231b80fe38edab39e8","kubernetes.io/config.seen":"2024-05-07T19:33:44.165678627Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6159 chars]
	I0507 19:37:00.014539    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:37:00.014665    6544 round_trippers.go:469] Request Headers:
	I0507 19:37:00.014665    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:37:00.014665    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:37:00.017387    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:37:00.017387    6544 round_trippers.go:577] Response Headers:
	I0507 19:37:00.017387    6544 round_trippers.go:580]     Audit-Id: 6560c6e3-7a26-4396-a5eb-167f13b120aa
	I0507 19:37:00.017387    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:37:00.017387    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:37:00.017387    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:37:00.017387    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:37:00.017387    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:00 GMT
	I0507 19:37:00.017969    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"466","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0507 19:37:00.018287    6544 pod_ready.go:92] pod "etcd-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 19:37:00.018419    6544 pod_ready.go:81] duration metric: took 9.8269ms for pod "etcd-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:37:00.018419    6544 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:37:00.018507    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600000
	I0507 19:37:00.018507    6544 round_trippers.go:469] Request Headers:
	I0507 19:37:00.018596    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:37:00.018596    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:37:00.021179    6544 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0507 19:37:00.021179    6544 round_trippers.go:577] Response Headers:
	I0507 19:37:00.021179    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:37:00.021179    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:37:00.021265    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:00 GMT
	I0507 19:37:00.021265    6544 round_trippers.go:580]     Audit-Id: faa520eb-6d3f-4256-a496-b8ccf630f3a8
	I0507 19:37:00.021265    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:37:00.021265    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:37:00.021396    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600000","namespace":"kube-system","uid":"c2ba4e1a-3041-4395-a246-9dd28358b95a","resourceVersion":"420","creationTimestamp":"2024-05-07T19:33:44Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.143.74:8443","kubernetes.io/config.hash":"b4a96b44957f27b92ef21190115bc428","kubernetes.io/config.mirror":"b4a96b44957f27b92ef21190115bc428","kubernetes.io/config.seen":"2024-05-07T19:33:44.165672227Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7694 chars]
	I0507 19:37:00.021750    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:37:00.021750    6544 round_trippers.go:469] Request Headers:
	I0507 19:37:00.021750    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:37:00.021750    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:37:00.024367    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:37:00.025204    6544 round_trippers.go:577] Response Headers:
	I0507 19:37:00.025204    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:37:00.025204    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:00 GMT
	I0507 19:37:00.025204    6544 round_trippers.go:580]     Audit-Id: 55f984c8-fa05-4cbf-bd4f-269f5c459739
	I0507 19:37:00.025204    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:37:00.025204    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:37:00.025204    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:37:00.025401    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"466","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0507 19:37:00.025860    6544 pod_ready.go:92] pod "kube-apiserver-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 19:37:00.025925    6544 pod_ready.go:81] duration metric: took 7.5057ms for pod "kube-apiserver-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:37:00.025925    6544 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:37:00.026020    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-600000
	I0507 19:37:00.026020    6544 round_trippers.go:469] Request Headers:
	I0507 19:37:00.026020    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:37:00.026020    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:37:00.028625    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:37:00.028625    6544 round_trippers.go:577] Response Headers:
	I0507 19:37:00.028625    6544 round_trippers.go:580]     Audit-Id: ba7dd5de-7c81-427f-a251-fdb8b5b28520
	I0507 19:37:00.028625    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:37:00.028625    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:37:00.028625    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:37:00.028625    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:37:00.028625    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:00 GMT
	I0507 19:37:00.028625    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-600000","namespace":"kube-system","uid":"b960b526-da40-480d-9a72-9ab8c7f2989a","resourceVersion":"418","creationTimestamp":"2024-05-07T19:33:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f5d6aa60dc93b5e562f37ed2236c3022","kubernetes.io/config.mirror":"f5d6aa60dc93b5e562f37ed2236c3022","kubernetes.io/config.seen":"2024-05-07T19:33:37.010155750Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7264 chars]
	I0507 19:37:00.028625    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:37:00.028625    6544 round_trippers.go:469] Request Headers:
	I0507 19:37:00.028625    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:37:00.028625    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:37:00.031046    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:37:00.031046    6544 round_trippers.go:577] Response Headers:
	I0507 19:37:00.032030    6544 round_trippers.go:580]     Audit-Id: 46a10226-35d4-402d-9b68-8cdb816d14a2
	I0507 19:37:00.032030    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:37:00.032030    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:37:00.032030    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:37:00.032030    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:37:00.032030    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:00 GMT
	I0507 19:37:00.032198    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"466","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0507 19:37:00.032251    6544 pod_ready.go:92] pod "kube-controller-manager-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 19:37:00.032251    6544 pod_ready.go:81] duration metric: took 6.3261ms for pod "kube-controller-manager-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:37:00.032251    6544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9fb6t" in "kube-system" namespace to be "Ready" ...
	I0507 19:37:00.190517    6544 request.go:629] Waited for 157.9775ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9fb6t
	I0507 19:37:00.190606    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9fb6t
	I0507 19:37:00.190606    6544 round_trippers.go:469] Request Headers:
	I0507 19:37:00.190606    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:37:00.190606    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:37:00.192907    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:37:00.192907    6544 round_trippers.go:577] Response Headers:
	I0507 19:37:00.192907    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:37:00.192907    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:00 GMT
	I0507 19:37:00.192907    6544 round_trippers.go:580]     Audit-Id: 5107e65e-043b-452d-b703-81ea30f06f88
	I0507 19:37:00.192907    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:37:00.192907    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:37:00.192907    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:37:00.193861    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9fb6t","generateName":"kube-proxy-","namespace":"kube-system","uid":"f91cc93c-cb87-4494-9e11-b3bf74b9311d","resourceVersion":"631","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"952e0024-0710-460c-920c-3959ceadbd10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"952e0024-0710-460c-920c-3959ceadbd10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0507 19:37:00.392385    6544 request.go:629] Waited for 197.9494ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:37:00.392933    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:37:00.392933    6544 round_trippers.go:469] Request Headers:
	I0507 19:37:00.393028    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:37:00.393028    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:37:00.395400    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:37:00.395400    6544 round_trippers.go:577] Response Headers:
	I0507 19:37:00.395400    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:37:00.395400    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:00 GMT
	I0507 19:37:00.395400    6544 round_trippers.go:580]     Audit-Id: bfcf0015-30b9-4804-8d21-a7f64590ad36
	I0507 19:37:00.395400    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:37:00.395400    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:37:00.395400    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:37:00.396487    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"649","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"
f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"man [truncated 3264 chars]
	I0507 19:37:00.396927    6544 pod_ready.go:92] pod "kube-proxy-9fb6t" in "kube-system" namespace has status "Ready":"True"
	I0507 19:37:00.396927    6544 pod_ready.go:81] duration metric: took 364.6522ms for pod "kube-proxy-9fb6t" in "kube-system" namespace to be "Ready" ...
	I0507 19:37:00.396927    6544 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c9gw5" in "kube-system" namespace to be "Ready" ...
	I0507 19:37:00.594660    6544 request.go:629] Waited for 197.3878ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9gw5
	I0507 19:37:00.594752    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9gw5
	I0507 19:37:00.594752    6544 round_trippers.go:469] Request Headers:
	I0507 19:37:00.594752    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:37:00.594752    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:37:00.597458    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:37:00.597458    6544 round_trippers.go:577] Response Headers:
	I0507 19:37:00.598452    6544 round_trippers.go:580]     Audit-Id: 436c329e-3c28-4f97-befe-fac576011fa0
	I0507 19:37:00.598452    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:37:00.598452    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:37:00.598452    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:37:00.598452    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:37:00.598452    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:00 GMT
	I0507 19:37:00.598644    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c9gw5","generateName":"kube-proxy-","namespace":"kube-system","uid":"9a39807c-6243-4aa2-86f4-8626031c80a6","resourceVersion":"414","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"952e0024-0710-460c-920c-3959ceadbd10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"952e0024-0710-460c-920c-3959ceadbd10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5828 chars]
	I0507 19:37:00.797671    6544 request.go:629] Waited for 198.0794ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:37:00.798161    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:37:00.798161    6544 round_trippers.go:469] Request Headers:
	I0507 19:37:00.798161    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:37:00.798161    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:37:00.800338    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:37:00.800338    6544 round_trippers.go:577] Response Headers:
	I0507 19:37:00.800338    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:37:00.800338    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:37:00.800338    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:37:00.800338    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:37:00.800338    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:01 GMT
	I0507 19:37:00.800338    6544 round_trippers.go:580]     Audit-Id: 94b3eb8e-eb84-4ab9-962d-fc2fac96c139
	I0507 19:37:00.801387    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"466","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0507 19:37:00.801853    6544 pod_ready.go:92] pod "kube-proxy-c9gw5" in "kube-system" namespace has status "Ready":"True"
	I0507 19:37:00.801853    6544 pod_ready.go:81] duration metric: took 404.8999ms for pod "kube-proxy-c9gw5" in "kube-system" namespace to be "Ready" ...
	I0507 19:37:00.801939    6544 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:37:01.000192    6544 request.go:629] Waited for 197.9208ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600000
	I0507 19:37:01.000521    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600000
	I0507 19:37:01.000623    6544 round_trippers.go:469] Request Headers:
	I0507 19:37:01.000623    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:37:01.000708    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:37:01.003411    6544 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:37:01.004061    6544 round_trippers.go:577] Response Headers:
	I0507 19:37:01.004061    6544 round_trippers.go:580]     Audit-Id: c9de2ca5-2baf-473a-9be6-c356665e6422
	I0507 19:37:01.004061    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:37:01.004061    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:37:01.004061    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:37:01.004061    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:37:01.004061    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:01 GMT
	I0507 19:37:01.004278    6544 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-600000","namespace":"kube-system","uid":"ec3ac949-cb83-49be-a908-c93e23135ae8","resourceVersion":"419","creationTimestamp":"2024-05-07T19:33:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c4ee79f6d4f6adb00b636f817445fef","kubernetes.io/config.mirror":"7c4ee79f6d4f6adb00b636f817445fef","kubernetes.io/config.seen":"2024-05-07T19:33:44.165677427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4994 chars]
	I0507 19:37:01.188779    6544 request.go:629] Waited for 184.003ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:37:01.189045    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes/multinode-600000
	I0507 19:37:01.189045    6544 round_trippers.go:469] Request Headers:
	I0507 19:37:01.189045    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:37:01.189045    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:37:01.193028    6544 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:37:01.193028    6544 round_trippers.go:577] Response Headers:
	I0507 19:37:01.193028    6544 round_trippers.go:580]     Audit-Id: e97789c4-9a1a-4b64-a4d0-947f7aac0f4a
	I0507 19:37:01.193028    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:37:01.193028    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:37:01.193028    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:37:01.193028    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:37:01.193028    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:01 GMT
	I0507 19:37:01.193816    6544 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"466","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","api
Version":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","fi [truncated 4959 chars]
	I0507 19:37:01.194541    6544 pod_ready.go:92] pod "kube-scheduler-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 19:37:01.194541    6544 pod_ready.go:81] duration metric: took 392.5767ms for pod "kube-scheduler-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:37:01.194667    6544 pod_ready.go:38] duration metric: took 1.2022207s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 19:37:01.194719    6544 system_svc.go:44] waiting for kubelet service to be running ....
	I0507 19:37:01.203552    6544 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 19:37:01.227617    6544 system_svc.go:56] duration metric: took 32.8965ms WaitForService to wait for kubelet
	I0507 19:37:01.228281    6544 kubeadm.go:576] duration metric: took 20.9820181s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 19:37:01.228372    6544 node_conditions.go:102] verifying NodePressure condition ...
	I0507 19:37:01.390572    6544 request.go:629] Waited for 162.0706ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.143.74:8443/api/v1/nodes
	I0507 19:37:01.390873    6544 round_trippers.go:463] GET https://172.19.143.74:8443/api/v1/nodes
	I0507 19:37:01.390873    6544 round_trippers.go:469] Request Headers:
	I0507 19:37:01.390873    6544 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:37:01.390873    6544 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:37:01.395770    6544 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:37:01.395770    6544 round_trippers.go:577] Response Headers:
	I0507 19:37:01.395770    6544 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:37:01.395770    6544 round_trippers.go:580]     Date: Tue, 07 May 2024 19:37:01 GMT
	I0507 19:37:01.395967    6544 round_trippers.go:580]     Audit-Id: 6227c782-a1c9-4353-abd1-a77ec4baf3ea
	I0507 19:37:01.395967    6544 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:37:01.395967    6544 round_trippers.go:580]     Content-Type: application/json
	I0507 19:37:01.395967    6544 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:37:01.396313    6544 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"651"},"items":[{"metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"466","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFiel
ds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time" [truncated 9268 chars]
	I0507 19:37:01.397102    6544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 19:37:01.397225    6544 node_conditions.go:123] node cpu capacity is 2
	I0507 19:37:01.397225    6544 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 19:37:01.397225    6544 node_conditions.go:123] node cpu capacity is 2
	I0507 19:37:01.397225    6544 node_conditions.go:105] duration metric: took 168.8421ms to run NodePressure ...
	I0507 19:37:01.397225    6544 start.go:240] waiting for startup goroutines ...
	I0507 19:37:01.397500    6544 start.go:254] writing updated cluster config ...
	I0507 19:37:01.406702    6544 ssh_runner.go:195] Run: rm -f paused
	I0507 19:37:01.520042    6544 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0507 19:37:01.523628    6544 out.go:177] * Done! kubectl is now configured to use "multinode-600000" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 07 19:34:11 multinode-600000 dockerd[1330]: time="2024-05-07T19:34:11.328450347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 19:34:11 multinode-600000 dockerd[1330]: time="2024-05-07T19:34:11.342292437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 19:34:11 multinode-600000 dockerd[1330]: time="2024-05-07T19:34:11.342478849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 19:34:11 multinode-600000 dockerd[1330]: time="2024-05-07T19:34:11.342499951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 19:34:11 multinode-600000 dockerd[1330]: time="2024-05-07T19:34:11.343392608Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 19:34:11 multinode-600000 cri-dockerd[1229]: time="2024-05-07T19:34:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/57950c0fdcbe4c7e6d3490c6477c947eac153e908d8e81090ef8205a050bb14c/resolv.conf as [nameserver 172.19.128.1]"
	May 07 19:34:11 multinode-600000 cri-dockerd[1229]: time="2024-05-07T19:34:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/99af61c6e282aa13c7209e469e5e354f24968796fc455a65fdf2e8611f760994/resolv.conf as [nameserver 172.19.128.1]"
	May 07 19:34:11 multinode-600000 dockerd[1330]: time="2024-05-07T19:34:11.697267259Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 19:34:11 multinode-600000 dockerd[1330]: time="2024-05-07T19:34:11.698385028Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 19:34:11 multinode-600000 dockerd[1330]: time="2024-05-07T19:34:11.698402029Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 19:34:11 multinode-600000 dockerd[1330]: time="2024-05-07T19:34:11.698487434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 19:34:11 multinode-600000 dockerd[1330]: time="2024-05-07T19:34:11.800263430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 19:34:11 multinode-600000 dockerd[1330]: time="2024-05-07T19:34:11.800898670Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 19:34:11 multinode-600000 dockerd[1330]: time="2024-05-07T19:34:11.801176287Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 19:34:11 multinode-600000 dockerd[1330]: time="2024-05-07T19:34:11.801976036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 19:37:23 multinode-600000 dockerd[1330]: time="2024-05-07T19:37:23.784386954Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 19:37:23 multinode-600000 dockerd[1330]: time="2024-05-07T19:37:23.784479959Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 19:37:23 multinode-600000 dockerd[1330]: time="2024-05-07T19:37:23.784499561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 19:37:23 multinode-600000 dockerd[1330]: time="2024-05-07T19:37:23.785247007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 19:37:23 multinode-600000 cri-dockerd[1229]: time="2024-05-07T19:37:23Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/4afb10dc8b11575b4eaa25a6b283141c6e029c9b44d3db3a69e4c934171b778e/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	May 07 19:37:25 multinode-600000 cri-dockerd[1229]: time="2024-05-07T19:37:25Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	May 07 19:37:25 multinode-600000 dockerd[1330]: time="2024-05-07T19:37:25.326460906Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	May 07 19:37:25 multinode-600000 dockerd[1330]: time="2024-05-07T19:37:25.327924702Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	May 07 19:37:25 multinode-600000 dockerd[1330]: time="2024-05-07T19:37:25.327941803Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	May 07 19:37:25 multinode-600000 dockerd[1330]: time="2024-05-07T19:37:25.328627348Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	66301c2be7060       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   45 seconds ago      Running             busybox                   0                   4afb10dc8b115       busybox-fc5497c4f-gcqlv
	9550b237d8d7b       cbb01a7bd410d                                                                                         3 minutes ago       Running             coredns                   0                   99af61c6e282a       coredns-7db6d8ff4d-5j966
	232351adf489a       6e38f40d628db                                                                                         3 minutes ago       Running             storage-provisioner       0                   57950c0fdcbe4       storage-provisioner
	2d49ad078ed35       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              4 minutes ago       Running             kindnet-cni               0                   58ebd877d77fb       kindnet-zw4r9
	aa9692c1fbd3b       a0bf559e280cf                                                                                         4 minutes ago       Running             kube-proxy                0                   70cff02905e8f       kube-proxy-c9gw5
	1ad9d59483256       c42f13656d0b2                                                                                         4 minutes ago       Running             kube-apiserver            0                   86921e7643746       kube-apiserver-multinode-600000
	7cefdac2050fa       259c8277fcbbc                                                                                         4 minutes ago       Running             kube-scheduler            0                   75f27faec2ed6       kube-scheduler-multinode-600000
	3067f16e2e380       c7aad43836fa5                                                                                         4 minutes ago       Running             kube-controller-manager   0                   af16a92d7c1cc       kube-controller-manager-multinode-600000
	675dcdcafeef0       3861cfcd7c04c                                                                                         4 minutes ago       Running             etcd                      0                   ca0d420373470       etcd-multinode-600000
	
	
	==> coredns [9550b237d8d7] <==
	[INFO] 10.244.1.2:54956 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091106s
	[INFO] 10.244.0.3:37511 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00031542s
	[INFO] 10.244.0.3:47331 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061304s
	[INFO] 10.244.0.3:36195 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211814s
	[INFO] 10.244.0.3:37240 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014531s
	[INFO] 10.244.0.3:56992 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00014411s
	[INFO] 10.244.0.3:53922 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127508s
	[INFO] 10.244.0.3:51034 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000225815s
	[INFO] 10.244.0.3:45123 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130808s
	[INFO] 10.244.1.2:53185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190512s
	[INFO] 10.244.1.2:47331 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056804s
	[INFO] 10.244.1.2:42551 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058104s
	[INFO] 10.244.1.2:47860 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057104s
	[INFO] 10.244.0.3:53037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190312s
	[INFO] 10.244.0.3:60613 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143109s
	[INFO] 10.244.0.3:33867 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069105s
	[INFO] 10.244.0.3:40289 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014191s
	[INFO] 10.244.1.2:55673 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204514s
	[INFO] 10.244.1.2:46474 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132609s
	[INFO] 10.244.1.2:48070 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000170211s
	[INFO] 10.244.1.2:56147 - 5 "PTR IN 1.128.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093806s
	[INFO] 10.244.0.3:39426 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107507s
	[INFO] 10.244.0.3:42569 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000295619s
	[INFO] 10.244.0.3:56970 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000267917s
	[INFO] 10.244.0.3:55625 - 5 "PTR IN 1.128.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00014751s
	
	
	==> describe nodes <==
	Name:               multinode-600000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-600000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	                    minikube.k8s.io/name=multinode-600000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_07T19_33_45_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 May 2024 19:33:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-600000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 May 2024 19:38:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 May 2024 19:37:49 +0000   Tue, 07 May 2024 19:33:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 May 2024 19:37:49 +0000   Tue, 07 May 2024 19:33:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 May 2024 19:37:49 +0000   Tue, 07 May 2024 19:33:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 May 2024 19:37:49 +0000   Tue, 07 May 2024 19:34:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.143.74
	  Hostname:    multinode-600000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 0d554687bc774c09994c27b2eb928b75
	  System UUID:                f3433f71-57fc-a747-9f8d-4f98c0c4b458
	  Boot ID:                    292ba9e8-0260-40f1-9f45-e386d20ffce9
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gcqlv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 coredns-7db6d8ff4d-5j966                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m12s
	  kube-system                 etcd-multinode-600000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m26s
	  kube-system                 kindnet-zw4r9                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m12s
	  kube-system                 kube-apiserver-multinode-600000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 kube-controller-manager-multinode-600000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m27s
	  kube-system                 kube-proxy-c9gw5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	  kube-system                 kube-scheduler-multinode-600000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m26s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 4m10s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  4m33s (x8 over 4m33s)  kubelet          Node multinode-600000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m33s (x8 over 4m33s)  kubelet          Node multinode-600000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m33s (x7 over 4m33s)  kubelet          Node multinode-600000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m26s                  kubelet          Node multinode-600000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m26s                  kubelet          Node multinode-600000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m26s                  kubelet          Node multinode-600000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m13s                  node-controller  Node multinode-600000 event: Registered Node multinode-600000 in Controller
	  Normal  NodeReady                4m                     kubelet          Node multinode-600000 status is now: NodeReady
	
	
	Name:               multinode-600000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-600000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	                    minikube.k8s.io/name=multinode-600000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_07T19_36_40_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 May 2024 19:36:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-600000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 May 2024 19:38:01 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 May 2024 19:37:40 +0000   Tue, 07 May 2024 19:36:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 May 2024 19:37:40 +0000   Tue, 07 May 2024 19:36:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 May 2024 19:37:40 +0000   Tue, 07 May 2024 19:36:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 May 2024 19:37:40 +0000   Tue, 07 May 2024 19:36:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.143.144
	  Hostname:    multinode-600000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 34eb4e78cde1423b93517d0087c85f3c
	  System UUID:                7ed694c3-4cb4-954c-b244-d0ff36461420
	  Boot ID:                    6dd39eeb-a923-4a09-93c8-8c26dd122d68
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-cpw2r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                 kindnet-jmlw2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      91s
	  kube-system                 kube-proxy-9fb6t           0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 79s                kube-proxy       
	  Normal  NodeHasSufficientMemory  91s (x2 over 91s)  kubelet          Node multinode-600000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    91s (x2 over 91s)  kubelet          Node multinode-600000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     91s (x2 over 91s)  kubelet          Node multinode-600000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  91s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           88s                node-controller  Node multinode-600000-m02 event: Registered Node multinode-600000-m02 in Controller
	  Normal  NodeReady                71s                kubelet          Node multinode-600000-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[May 7 19:32] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.175803] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[May 7 19:33] systemd-fstab-generator[951]: Ignoring "noauto" option for root device
	[  +0.097535] kauditd_printk_skb: 59 callbacks suppressed
	[  +0.454954] systemd-fstab-generator[986]: Ignoring "noauto" option for root device
	[  +0.168613] systemd-fstab-generator[998]: Ignoring "noauto" option for root device
	[  +0.191333] systemd-fstab-generator[1012]: Ignoring "noauto" option for root device
	[  +2.721133] systemd-fstab-generator[1182]: Ignoring "noauto" option for root device
	[  +0.177492] systemd-fstab-generator[1194]: Ignoring "noauto" option for root device
	[  +0.182533] systemd-fstab-generator[1206]: Ignoring "noauto" option for root device
	[  +0.242911] systemd-fstab-generator[1221]: Ignoring "noauto" option for root device
	[ +11.451024] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[  +0.095279] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.636124] systemd-fstab-generator[1510]: Ignoring "noauto" option for root device
	[  +5.583504] systemd-fstab-generator[1705]: Ignoring "noauto" option for root device
	[  +0.090570] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.538196] systemd-fstab-generator[2115]: Ignoring "noauto" option for root device
	[  +0.118813] kauditd_printk_skb: 62 callbacks suppressed
	[ +14.471019] systemd-fstab-generator[2306]: Ignoring "noauto" option for root device
	[  +0.221235] kauditd_printk_skb: 12 callbacks suppressed
	[May 7 19:34] kauditd_printk_skb: 51 callbacks suppressed
	[May 7 19:37] hrtimer: interrupt took 654841 ns
	[ +11.517913] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [675dcdcafeef] <==
	{"level":"info","ts":"2024-05-07T19:33:39.198493Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 received MsgPreVoteResp from aac5eb588ad33a11 at term 1"}
	{"level":"info","ts":"2024-05-07T19:33:39.198637Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became candidate at term 2"}
	{"level":"info","ts":"2024-05-07T19:33:39.198866Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 received MsgVoteResp from aac5eb588ad33a11 at term 2"}
	{"level":"info","ts":"2024-05-07T19:33:39.199268Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became leader at term 2"}
	{"level":"info","ts":"2024-05-07T19:33:39.19943Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aac5eb588ad33a11 elected leader aac5eb588ad33a11 at term 2"}
	{"level":"info","ts":"2024-05-07T19:33:39.207227Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-07T19:33:39.207567Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aac5eb588ad33a11","local-member-attributes":"{Name:multinode-600000 ClientURLs:[https://172.19.143.74:2379]}","request-path":"/0/members/aac5eb588ad33a11/attributes","cluster-id":"9263975694bef132","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-07T19:33:39.210024Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-07T19:33:39.210346Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-07T19:33:39.213023Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-07T19:33:39.21306Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-07T19:33:39.213274Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9263975694bef132","local-member-id":"aac5eb588ad33a11","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-07T19:33:39.213526Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-07T19:33:39.216023Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-07T19:33:39.215019Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.143.74:2379"}
	{"level":"info","ts":"2024-05-07T19:33:39.220807Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-05-07T19:34:25.158741Z","caller":"traceutil/trace.go:171","msg":"trace[1592418721] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"137.370734ms","start":"2024-05-07T19:34:25.021354Z","end":"2024-05-07T19:34:25.158725Z","steps":["trace[1592418721] 'process raft request'  (duration: 137.225324ms)"],"step_count":1}
	{"level":"info","ts":"2024-05-07T19:36:50.079152Z","caller":"traceutil/trace.go:171","msg":"trace[1727508685] transaction","detail":"{read_only:false; response_revision:627; number_of_response:1; }","duration":"111.591205ms","start":"2024-05-07T19:36:49.967542Z","end":"2024-05-07T19:36:50.079133Z","steps":["trace[1727508685] 'process raft request'  (duration: 111.074173ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-07T19:36:50.418542Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.736901ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-600000-m02\" ","response":"range_response_count:1 size:3149"}
	{"level":"warn","ts":"2024-05-07T19:36:50.419269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"334.413291ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-600000-m02\" ","response":"range_response_count:1 size:3149"}
	{"level":"info","ts":"2024-05-07T19:36:50.41944Z","caller":"traceutil/trace.go:171","msg":"trace[821316859] range","detail":"{range_begin:/registry/minions/multinode-600000-m02; range_end:; response_count:1; response_revision:627; }","duration":"334.600603ms","start":"2024-05-07T19:36:50.084829Z","end":"2024-05-07T19:36:50.419429Z","steps":["trace[821316859] 'range keys from in-memory index tree'  (duration: 334.344286ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-07T19:36:50.419521Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-05-07T19:36:50.084817Z","time spent":"334.694709ms","remote":"127.0.0.1:57964","response type":"/etcdserverpb.KV/Range","request count":0,"request size":40,"response count":1,"response size":3173,"request content":"key:\"/registry/minions/multinode-600000-m02\" "}
	{"level":"info","ts":"2024-05-07T19:36:50.419242Z","caller":"traceutil/trace.go:171","msg":"trace[1758372899] range","detail":"{range_begin:/registry/minions/multinode-600000-m02; range_end:; response_count:1; response_revision:627; }","duration":"191.469647ms","start":"2024-05-07T19:36:50.227755Z","end":"2024-05-07T19:36:50.419225Z","steps":["trace[1758372899] 'range keys from in-memory index tree'  (duration: 190.680698ms)"],"step_count":1}
	{"level":"warn","ts":"2024-05-07T19:36:50.419039Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"111.419893ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/\" range_end:\"/registry/persistentvolumeclaims0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-05-07T19:36:50.42027Z","caller":"traceutil/trace.go:171","msg":"trace[1502172360] range","detail":"{range_begin:/registry/persistentvolumeclaims/; range_end:/registry/persistentvolumeclaims0; response_count:0; response_revision:627; }","duration":"112.889785ms","start":"2024-05-07T19:36:50.307367Z","end":"2024-05-07T19:36:50.420256Z","steps":["trace[1502172360] 'count revisions from in-memory index tree'  (duration: 111.376191ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:38:10 up 6 min,  0 users,  load average: 0.61, 0.55, 0.27
	Linux multinode-600000 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2d49ad078ed3] <==
	I0507 19:37:06.897615       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:37:16.904660       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:37:16.904749       1 main.go:227] handling current node
	I0507 19:37:16.904763       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:37:16.904770       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:37:26.913871       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:37:26.914024       1 main.go:227] handling current node
	I0507 19:37:26.914041       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:37:26.914052       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:37:36.920687       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:37:36.920799       1 main.go:227] handling current node
	I0507 19:37:36.920813       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:37:36.920821       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:37:46.930794       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:37:46.930910       1 main.go:227] handling current node
	I0507 19:37:46.930931       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:37:46.930940       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:37:56.937448       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:37:56.937548       1 main.go:227] handling current node
	I0507 19:37:56.937562       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:37:56.937570       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:38:06.944266       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:38:06.944300       1 main.go:227] handling current node
	I0507 19:38:06.944332       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:38:06.944339       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [1ad9d5948325] <==
	I0507 19:33:41.980484       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0507 19:33:41.994473       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0507 19:33:41.994557       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0507 19:33:43.018380       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0507 19:33:43.105670       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0507 19:33:43.234932       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0507 19:33:43.252272       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.143.74]
	I0507 19:33:43.253392       1 controller.go:615] quota admission added evaluator for: endpoints
	I0507 19:33:43.270930       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0507 19:33:44.047782       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0507 19:33:44.233605       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0507 19:33:44.274215       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0507 19:33:44.302748       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0507 19:33:58.250873       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0507 19:33:58.349687       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0507 19:37:28.767730       1 conn.go:339] Error on socket receive: read tcp 172.19.143.74:8443->172.19.128.1:52123: use of closed network connection
	E0507 19:37:29.180909       1 conn.go:339] Error on socket receive: read tcp 172.19.143.74:8443->172.19.128.1:52125: use of closed network connection
	E0507 19:37:29.663949       1 conn.go:339] Error on socket receive: read tcp 172.19.143.74:8443->172.19.128.1:52127: use of closed network connection
	E0507 19:37:30.091306       1 conn.go:339] Error on socket receive: read tcp 172.19.143.74:8443->172.19.128.1:52129: use of closed network connection
	E0507 19:37:30.509230       1 conn.go:339] Error on socket receive: read tcp 172.19.143.74:8443->172.19.128.1:52131: use of closed network connection
	E0507 19:37:30.908211       1 conn.go:339] Error on socket receive: read tcp 172.19.143.74:8443->172.19.128.1:52133: use of closed network connection
	E0507 19:37:31.641294       1 conn.go:339] Error on socket receive: read tcp 172.19.143.74:8443->172.19.128.1:52136: use of closed network connection
	E0507 19:37:42.048781       1 conn.go:339] Error on socket receive: read tcp 172.19.143.74:8443->172.19.128.1:52138: use of closed network connection
	E0507 19:37:42.455431       1 conn.go:339] Error on socket receive: read tcp 172.19.143.74:8443->172.19.128.1:52141: use of closed network connection
	E0507 19:37:52.869713       1 conn.go:339] Error on socket receive: read tcp 172.19.143.74:8443->172.19.128.1:52143: use of closed network connection
	
	
	==> kube-controller-manager [3067f16e2e38] <==
	I0507 19:33:58.353634       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0507 19:33:58.648491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="254.239192ms"
	I0507 19:33:58.768889       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="120.227252ms"
	I0507 19:33:58.768980       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.703µs"
	I0507 19:33:59.385629       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.4593ms"
	I0507 19:33:59.400563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.850657ms"
	I0507 19:33:59.442803       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.020809ms"
	I0507 19:33:59.442937       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.204µs"
	I0507 19:34:10.730717       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="75.405µs"
	I0507 19:34:10.778543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="100.807µs"
	I0507 19:34:12.746728       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0507 19:34:12.843910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.905µs"
	I0507 19:34:12.916087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.128233ms"
	I0507 19:34:12.920189       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="131.008µs"
	I0507 19:36:39.748714       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m02\" does not exist"
	I0507 19:36:39.768095       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000-m02" podCIDRs=["10.244.1.0/24"]
	I0507 19:36:42.771386       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000-m02"
	I0507 19:36:59.833069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:37:23.261574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.822997ms"
	I0507 19:37:23.275925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.242181ms"
	I0507 19:37:23.277411       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.303µs"
	I0507 19:37:25.468822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.984518ms"
	I0507 19:37:25.471412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.381856ms"
	I0507 19:37:26.028543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.755438ms"
	I0507 19:37:26.029180       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.706µs"
	
	
	==> kube-proxy [aa9692c1fbd3] <==
	I0507 19:33:59.788332       1 server_linux.go:69] "Using iptables proxy"
	I0507 19:33:59.819474       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.143.74"]
	I0507 19:33:59.872130       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0507 19:33:59.872292       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0507 19:33:59.872320       1 server_linux.go:165] "Using iptables Proxier"
	I0507 19:33:59.878610       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0507 19:33:59.879634       1 server.go:872] "Version info" version="v1.30.0"
	I0507 19:33:59.879774       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:33:59.883100       1 config.go:192] "Starting service config controller"
	I0507 19:33:59.884238       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0507 19:33:59.884310       1 config.go:101] "Starting endpoint slice config controller"
	I0507 19:33:59.884544       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0507 19:33:59.886801       1 config.go:319] "Starting node config controller"
	I0507 19:33:59.888528       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0507 19:33:59.985346       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0507 19:33:59.985458       1 shared_informer.go:320] Caches are synced for service config
	I0507 19:33:59.988897       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [7cefdac2050f] <==
	W0507 19:33:42.156561       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0507 19:33:42.157128       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0507 19:33:42.162271       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0507 19:33:42.162599       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0507 19:33:42.229371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0507 19:33:42.229525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0507 19:33:42.264429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0507 19:33:42.264596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0507 19:33:42.284763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0507 19:33:42.284872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0507 19:33:42.338396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0507 19:33:42.338683       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0507 19:33:42.356861       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0507 19:33:42.356964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0507 19:33:42.435844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0507 19:33:42.436739       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0507 19:33:42.446379       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0507 19:33:42.446557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0507 19:33:42.489593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0507 19:33:42.489896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0507 19:33:42.647287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0507 19:33:42.648065       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0507 19:33:42.657928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0507 19:33:42.658018       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0507 19:33:43.909008       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	May 07 19:34:10 multinode-600000 kubelet[2122]: I0507 19:34:10.815190    2122 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f5q56\" (UniqueName: \"kubernetes.io/projected/90142b77-53fb-42e1-94f8-7f8a3c7765ac-kube-api-access-f5q56\") pod \"storage-provisioner\" (UID: \"90142b77-53fb-42e1-94f8-7f8a3c7765ac\") " pod="kube-system/storage-provisioner"
	May 07 19:34:12 multinode-600000 kubelet[2122]: I0507 19:34:12.876193    2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5j966" podStartSLOduration=14.876168865 podStartE2EDuration="14.876168865s" podCreationTimestamp="2024-05-07 19:33:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-07 19:34:12.844902592 +0000 UTC m=+28.816212944" watchObservedRunningTime="2024-05-07 19:34:12.876168865 +0000 UTC m=+28.847479217"
	May 07 19:34:12 multinode-600000 kubelet[2122]: I0507 19:34:12.894145    2122 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=7.894131498 podStartE2EDuration="7.894131498s" podCreationTimestamp="2024-05-07 19:34:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-07 19:34:12.877089323 +0000 UTC m=+28.848399675" watchObservedRunningTime="2024-05-07 19:34:12.894131498 +0000 UTC m=+28.865441850"
	May 07 19:34:44 multinode-600000 kubelet[2122]: E0507 19:34:44.253789    2122 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 19:34:44 multinode-600000 kubelet[2122]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 19:34:44 multinode-600000 kubelet[2122]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 19:34:44 multinode-600000 kubelet[2122]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 19:34:44 multinode-600000 kubelet[2122]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 19:35:44 multinode-600000 kubelet[2122]: E0507 19:35:44.255942    2122 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 19:35:44 multinode-600000 kubelet[2122]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 19:35:44 multinode-600000 kubelet[2122]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 19:35:44 multinode-600000 kubelet[2122]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 19:35:44 multinode-600000 kubelet[2122]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 19:36:44 multinode-600000 kubelet[2122]: E0507 19:36:44.255371    2122 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 19:36:44 multinode-600000 kubelet[2122]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 19:36:44 multinode-600000 kubelet[2122]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 19:36:44 multinode-600000 kubelet[2122]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 19:36:44 multinode-600000 kubelet[2122]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 19:37:23 multinode-600000 kubelet[2122]: I0507 19:37:23.251128    2122 topology_manager.go:215] "Topology Admit Handler" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a" podNamespace="default" podName="busybox-fc5497c4f-gcqlv"
	May 07 19:37:23 multinode-600000 kubelet[2122]: I0507 19:37:23.410128    2122 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77z75\" (UniqueName: \"kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75\") pod \"busybox-fc5497c4f-gcqlv\" (UID: \"d98009ce-3495-481a-86b3-7c1e9422ca5a\") " pod="default/busybox-fc5497c4f-gcqlv"
	May 07 19:37:44 multinode-600000 kubelet[2122]: E0507 19:37:44.255776    2122 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 19:37:44 multinode-600000 kubelet[2122]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 19:37:44 multinode-600000 kubelet[2122]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 19:37:44 multinode-600000 kubelet[2122]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 19:37:44 multinode-600000 kubelet[2122]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0507 19:38:03.347621    8392 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-600000 -n multinode-600000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-600000 -n multinode-600000: (10.5282635s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-600000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (51.62s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (592.4s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-600000
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-600000
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-600000: (1m32.2362696s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-600000 --wait=true -v=8 --alsologtostderr
E0507 19:55:01.446089    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 19:55:23.437166    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 19:58:04.686258    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 20:00:01.461866    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-600000 --wait=true -v=8 --alsologtostderr: (7m36.8046972s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-600000
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-600000	172.19.143.74
multinode-600000-m02	172.19.143.144
multinode-600000-m03	172.19.129.4

                                                
                                                
After restart: multinode-600000	172.19.135.22
multinode-600000-m02	172.19.128.95
multinode-600000-m03	172.19.142.217
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-600000 -n multinode-600000
E0507 20:00:23.450198    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-600000 -n multinode-600000: (10.9806506s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 logs -n 25: (12.3726541s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | multinode-600000 ssh -n                                                                                                  | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:44 UTC | 07 May 24 19:44 UTC |
	|         | multinode-600000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-600000 cp multinode-600000-m02:/home/docker/cp-test.txt                                                        | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:44 UTC | 07 May 24 19:44 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile2685173768\001\cp-test_multinode-600000-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-600000 ssh -n                                                                                                  | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:44 UTC | 07 May 24 19:44 UTC |
	|         | multinode-600000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-600000 cp multinode-600000-m02:/home/docker/cp-test.txt                                                        | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:44 UTC | 07 May 24 19:44 UTC |
	|         | multinode-600000:/home/docker/cp-test_multinode-600000-m02_multinode-600000.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-600000 ssh -n                                                                                                  | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:44 UTC | 07 May 24 19:44 UTC |
	|         | multinode-600000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-600000 ssh -n multinode-600000 sudo cat                                                                        | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:44 UTC | 07 May 24 19:45 UTC |
	|         | /home/docker/cp-test_multinode-600000-m02_multinode-600000.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-600000 cp multinode-600000-m02:/home/docker/cp-test.txt                                                        | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:45 UTC | 07 May 24 19:45 UTC |
	|         | multinode-600000-m03:/home/docker/cp-test_multinode-600000-m02_multinode-600000-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-600000 ssh -n                                                                                                  | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:45 UTC | 07 May 24 19:45 UTC |
	|         | multinode-600000-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-600000 ssh -n multinode-600000-m03 sudo cat                                                                    | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:45 UTC | 07 May 24 19:45 UTC |
	|         | /home/docker/cp-test_multinode-600000-m02_multinode-600000-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-600000 cp testdata\cp-test.txt                                                                                 | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:45 UTC | 07 May 24 19:45 UTC |
	|         | multinode-600000-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-600000 ssh -n                                                                                                  | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:45 UTC | 07 May 24 19:45 UTC |
	|         | multinode-600000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-600000 cp multinode-600000-m03:/home/docker/cp-test.txt                                                        | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:45 UTC | 07 May 24 19:46 UTC |
	|         | C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile2685173768\001\cp-test_multinode-600000-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-600000 ssh -n                                                                                                  | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:46 UTC | 07 May 24 19:46 UTC |
	|         | multinode-600000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-600000 cp multinode-600000-m03:/home/docker/cp-test.txt                                                        | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:46 UTC | 07 May 24 19:46 UTC |
	|         | multinode-600000:/home/docker/cp-test_multinode-600000-m03_multinode-600000.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-600000 ssh -n                                                                                                  | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:46 UTC | 07 May 24 19:46 UTC |
	|         | multinode-600000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-600000 ssh -n multinode-600000 sudo cat                                                                        | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:46 UTC | 07 May 24 19:46 UTC |
	|         | /home/docker/cp-test_multinode-600000-m03_multinode-600000.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-600000 cp multinode-600000-m03:/home/docker/cp-test.txt                                                        | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:46 UTC | 07 May 24 19:46 UTC |
	|         | multinode-600000-m02:/home/docker/cp-test_multinode-600000-m03_multinode-600000-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-600000 ssh -n                                                                                                  | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:46 UTC | 07 May 24 19:47 UTC |
	|         | multinode-600000-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-600000 ssh -n multinode-600000-m02 sudo cat                                                                    | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:47 UTC | 07 May 24 19:47 UTC |
	|         | /home/docker/cp-test_multinode-600000-m03_multinode-600000-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-600000 node stop m03                                                                                           | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:47 UTC | 07 May 24 19:47 UTC |
	| node    | multinode-600000 node start                                                                                              | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:48 UTC | 07 May 24 19:50 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-600000                                                                                                 | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:51 UTC |                     |
	| stop    | -p multinode-600000                                                                                                      | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:51 UTC | 07 May 24 19:52 UTC |
	| start   | -p multinode-600000                                                                                                      | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 19:52 UTC | 07 May 24 20:00 UTC |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	| node    | list -p multinode-600000                                                                                                 | multinode-600000 | minikube5\jenkins | v1.33.0 | 07 May 24 20:00 UTC |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/07 19:52:37
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0507 19:52:37.259841    5068 out.go:291] Setting OutFile to fd 892 ...
	I0507 19:52:37.259841    5068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 19:52:37.259841    5068 out.go:304] Setting ErrFile to fd 1008...
	I0507 19:52:37.259841    5068 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 19:52:37.277769    5068 out.go:298] Setting JSON to false
	I0507 19:52:37.281156    5068 start.go:129] hostinfo: {"hostname":"minikube5","uptime":27475,"bootTime":1715084082,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0507 19:52:37.281296    5068 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 19:52:37.487999    5068 out.go:177] * [multinode-600000] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0507 19:52:37.542112    5068 notify.go:220] Checking for updates...
	I0507 19:52:37.655932    5068 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 19:52:37.740937    5068 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 19:52:37.850247    5068 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0507 19:52:38.001091    5068 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 19:52:38.202458    5068 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 19:52:38.217149    5068 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:52:38.217631    5068 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 19:52:43.102839    5068 out.go:177] * Using the hyperv driver based on existing profile
	I0507 19:52:43.259709    5068 start.go:297] selected driver: hyperv
	I0507 19:52:43.259709    5068 start.go:901] validating driver "hyperv" against &{Name:multinode-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.143.74 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.143.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.129.4 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 19:52:43.260903    5068 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0507 19:52:43.305285    5068 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 19:52:43.305285    5068 cni.go:84] Creating CNI manager for ""
	I0507 19:52:43.305285    5068 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0507 19:52:43.305826    5068 start.go:340] cluster config:
	{Name:multinode-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.143.74 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.143.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.129.4 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 19:52:43.305932    5068 iso.go:125] acquiring lock: {Name:mk4977609d05da04fcecf95837b3381fb1950afd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 19:52:43.446360    5068 out.go:177] * Starting "multinode-600000" primary control-plane node in "multinode-600000" cluster
	I0507 19:52:43.541094    5068 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 19:52:43.542083    5068 preload.go:147] Found local preload: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0507 19:52:43.542083    5068 cache.go:56] Caching tarball of preloaded images
	I0507 19:52:43.542182    5068 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0507 19:52:43.542182    5068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 19:52:43.542182    5068 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\config.json ...
	I0507 19:52:43.545187    5068 start.go:360] acquireMachinesLock for multinode-600000: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 19:52:43.545643    5068 start.go:364] duration metric: took 355.2µs to acquireMachinesLock for "multinode-600000"
	I0507 19:52:43.545844    5068 start.go:96] Skipping create...Using existing machine configuration
	I0507 19:52:43.545844    5068 fix.go:54] fixHost starting: 
	I0507 19:52:43.546401    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:52:45.982438    5068 main.go:141] libmachine: [stdout =====>] : Off
	
	I0507 19:52:45.982657    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:52:45.982727    5068 fix.go:112] recreateIfNeeded on multinode-600000: state=Stopped err=<nil>
	W0507 19:52:45.982727    5068 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 19:52:46.046339    5068 out.go:177] * Restarting existing hyperv VM for "multinode-600000" ...
	I0507 19:52:46.097924    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-600000
	I0507 19:52:48.829149    5068 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:52:48.829149    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:52:48.829149    5068 main.go:141] libmachine: Waiting for host to start...
	I0507 19:52:48.830225    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:52:50.799928    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:52:50.799979    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:52:50.800081    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:52:52.997525    5068 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:52:52.997525    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:52:54.003451    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:52:55.946692    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:52:55.946766    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:52:55.946825    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:52:58.127777    5068 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:52:58.127777    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:52:59.130397    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:53:01.092538    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:53:01.092538    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:01.092538    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:53:03.316599    5068 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:53:03.316599    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:04.328592    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:53:06.257974    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:53:06.257974    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:06.258053    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:53:08.480695    5068 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:53:08.481068    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:09.486353    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:53:11.415127    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:53:11.415864    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:11.415921    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:53:13.713200    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:53:13.713200    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:13.715537    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:53:15.581101    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:53:15.581101    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:15.581101    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:53:17.815858    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:53:17.816799    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:17.816992    5068 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\config.json ...
	I0507 19:53:17.818793    5068 machine.go:94] provisionDockerMachine start ...
	I0507 19:53:17.818866    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:53:19.676790    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:53:19.676790    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:19.676876    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:53:21.909205    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:53:21.909226    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:21.912318    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:53:21.912894    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.22 22 <nil> <nil>}
	I0507 19:53:21.912894    5068 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 19:53:22.037709    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0507 19:53:22.037786    5068 buildroot.go:166] provisioning hostname "multinode-600000"
	I0507 19:53:22.037867    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:53:23.908493    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:53:23.908493    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:23.909678    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:53:26.124988    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:53:26.125249    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:26.128717    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:53:26.128757    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.22 22 <nil> <nil>}
	I0507 19:53:26.128757    5068 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-600000 && echo "multinode-600000" | sudo tee /etc/hostname
	I0507 19:53:26.281074    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-600000
	
	I0507 19:53:26.281074    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:53:28.156247    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:53:28.156247    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:28.156577    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:53:30.388199    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:53:30.388199    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:30.392352    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:53:30.393040    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.22 22 <nil> <nil>}
	I0507 19:53:30.393040    5068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-600000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-600000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-600000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 19:53:30.548973    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 19:53:30.548973    5068 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0507 19:53:30.548973    5068 buildroot.go:174] setting up certificates
	I0507 19:53:30.548973    5068 provision.go:84] configureAuth start
	I0507 19:53:30.548973    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:53:32.402926    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:53:32.402926    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:32.403097    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:53:34.594769    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:53:34.595199    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:34.595331    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:53:36.414761    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:53:36.414761    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:36.414761    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:53:38.631484    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:53:38.631484    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:38.632050    5068 provision.go:143] copyHostCerts
	I0507 19:53:38.632131    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0507 19:53:38.632421    5068 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0507 19:53:38.632421    5068 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0507 19:53:38.632890    5068 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0507 19:53:38.633745    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0507 19:53:38.633907    5068 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0507 19:53:38.633997    5068 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0507 19:53:38.634191    5068 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0507 19:53:38.635013    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0507 19:53:38.635153    5068 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0507 19:53:38.635153    5068 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0507 19:53:38.635394    5068 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0507 19:53:38.636034    5068 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-600000 san=[127.0.0.1 172.19.135.22 localhost minikube multinode-600000]
	I0507 19:53:38.767538    5068 provision.go:177] copyRemoteCerts
	I0507 19:53:38.774547    5068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 19:53:38.774547    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:53:40.613059    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:53:40.613059    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:40.613059    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:53:42.796708    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:53:42.796708    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:42.797823    5068 sshutil.go:53] new ssh client: &{IP:172.19.135.22 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\id_rsa Username:docker}
	I0507 19:53:42.905868    5068 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.1308885s)
	I0507 19:53:42.905868    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0507 19:53:42.906460    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0507 19:53:42.948354    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0507 19:53:42.948354    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0507 19:53:42.990040    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0507 19:53:42.990040    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0507 19:53:43.030045    5068 provision.go:87] duration metric: took 12.4801636s to configureAuth
	I0507 19:53:43.030117    5068 buildroot.go:189] setting minikube options for container-runtime
	I0507 19:53:43.031072    5068 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:53:43.031200    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:53:44.886287    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:53:44.886287    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:44.886287    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:53:47.048412    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:53:47.048412    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:47.052806    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:53:47.052934    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.22 22 <nil> <nil>}
	I0507 19:53:47.052934    5068 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 19:53:47.182450    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 19:53:47.182450    5068 buildroot.go:70] root file system type: tmpfs
	I0507 19:53:47.182607    5068 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 19:53:47.182691    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:53:49.021469    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:53:49.021469    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:49.021908    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:53:51.224039    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:53:51.224039    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:51.229461    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:53:51.229894    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.22 22 <nil> <nil>}
	I0507 19:53:51.229894    5068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 19:53:51.373698    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 19:53:51.373807    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:53:53.246659    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:53:53.246659    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:53.247325    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:53:55.477493    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:53:55.477578    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:55.482970    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:53:55.483492    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.22 22 <nil> <nil>}
	I0507 19:53:55.483650    5068 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 19:53:57.793694    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0507 19:53:57.793694    5068 machine.go:97] duration metric: took 39.9722264s to provisionDockerMachine
	I0507 19:53:57.794239    5068 start.go:293] postStartSetup for "multinode-600000" (driver="hyperv")
	I0507 19:53:57.794239    5068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 19:53:57.804472    5068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 19:53:57.804472    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:53:59.661188    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:53:59.662226    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:53:59.662292    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:54:01.887225    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:54:01.887225    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:54:01.888363    5068 sshutil.go:53] new ssh client: &{IP:172.19.135.22 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\id_rsa Username:docker}
	I0507 19:54:01.984731    5068 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.179987s)
	I0507 19:54:01.994362    5068 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 19:54:02.001473    5068 command_runner.go:130] > NAME=Buildroot
	I0507 19:54:02.001473    5068 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0507 19:54:02.001582    5068 command_runner.go:130] > ID=buildroot
	I0507 19:54:02.001582    5068 command_runner.go:130] > VERSION_ID=2023.02.9
	I0507 19:54:02.001639    5068 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0507 19:54:02.001697    5068 info.go:137] Remote host: Buildroot 2023.02.9
	I0507 19:54:02.001697    5068 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0507 19:54:02.001697    5068 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0507 19:54:02.002876    5068 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> 99922.pem in /etc/ssl/certs
	I0507 19:54:02.002943    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /etc/ssl/certs/99922.pem
	I0507 19:54:02.012887    5068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0507 19:54:02.028507    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /etc/ssl/certs/99922.pem (1708 bytes)
	I0507 19:54:02.072408    5068 start.go:296] duration metric: took 4.2778918s for postStartSetup
	I0507 19:54:02.072408    5068 fix.go:56] duration metric: took 1m18.5214413s for fixHost
	I0507 19:54:02.072408    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:54:03.934499    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:54:03.934499    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:54:03.935064    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:54:06.147048    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:54:06.148048    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:54:06.152936    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:54:06.153458    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.22 22 <nil> <nil>}
	I0507 19:54:06.153528    5068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0507 19:54:06.293394    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715111646.532290622
	
	I0507 19:54:06.293490    5068 fix.go:216] guest clock: 1715111646.532290622
	I0507 19:54:06.293490    5068 fix.go:229] Guest: 2024-05-07 19:54:06.532290622 +0000 UTC Remote: 2024-05-07 19:54:02.0724085 +0000 UTC m=+84.927567301 (delta=4.459882122s)
	I0507 19:54:06.293490    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:54:08.132876    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:54:08.133685    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:54:08.133762    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:54:10.351769    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:54:10.351769    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:54:10.354855    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:54:10.354855    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.22 22 <nil> <nil>}
	I0507 19:54:10.354855    5068 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715111646
	I0507 19:54:10.503984    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May  7 19:54:06 UTC 2024
	
	I0507 19:54:10.503984    5068 fix.go:236] clock set: Tue May  7 19:54:06 UTC 2024
	 (err=<nil>)
	I0507 19:54:10.503984    5068 start.go:83] releasing machines lock for "multinode-600000", held for 1m26.9526707s
	I0507 19:54:10.503984    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:54:12.337158    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:54:12.337268    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:54:12.337268    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:54:14.551868    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:54:14.552163    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:54:14.555004    5068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 19:54:14.555004    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:54:14.562123    5068 ssh_runner.go:195] Run: cat /version.json
	I0507 19:54:14.562123    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:54:16.493075    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:54:16.493075    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:54:16.493997    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:54:16.494106    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:54:16.494106    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:54:16.494106    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:54:18.768970    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:54:18.768970    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:54:18.769504    5068 sshutil.go:53] new ssh client: &{IP:172.19.135.22 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\id_rsa Username:docker}
	I0507 19:54:18.789756    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:54:18.789756    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:54:18.790806    5068 sshutil.go:53] new ssh client: &{IP:172.19.135.22 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\id_rsa Username:docker}
	I0507 19:54:18.857027    5068 command_runner.go:130] > {"iso_version": "v1.33.0-1714498396-18779", "kicbase_version": "v0.0.43-1714386659-18769", "minikube_version": "v1.33.0", "commit": "0c7995ab2d4914d5c74027eee5f5d102e19316f2"}
	I0507 19:54:18.857446    5068 ssh_runner.go:235] Completed: cat /version.json: (4.294626s)
	I0507 19:54:18.866737    5068 ssh_runner.go:195] Run: systemctl --version
	I0507 19:54:18.971224    5068 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0507 19:54:18.971224    5068 command_runner.go:130] > systemd 252 (252)
	I0507 19:54:18.971332    5068 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0507 19:54:18.971332    5068 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.4159349s)
	I0507 19:54:18.980688    5068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0507 19:54:18.988645    5068 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0507 19:54:18.988645    5068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 19:54:18.997696    5068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0507 19:54:19.024346    5068 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0507 19:54:19.024508    5068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0507 19:54:19.024605    5068 start.go:494] detecting cgroup driver to use...
	I0507 19:54:19.024652    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 19:54:19.055056    5068 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0507 19:54:19.067689    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0507 19:54:19.095138    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 19:54:19.112066    5068 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 19:54:19.124729    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 19:54:19.155000    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 19:54:19.182763    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 19:54:19.209572    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 19:54:19.236157    5068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 19:54:19.273186    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 19:54:19.301141    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 19:54:19.328732    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0507 19:54:19.356188    5068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 19:54:19.371896    5068 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0507 19:54:19.381338    5068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 19:54:19.408895    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:54:19.583824    5068 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0507 19:54:19.608954    5068 start.go:494] detecting cgroup driver to use...
	I0507 19:54:19.622191    5068 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 19:54:19.641066    5068 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0507 19:54:19.641066    5068 command_runner.go:130] > [Unit]
	I0507 19:54:19.641066    5068 command_runner.go:130] > Description=Docker Application Container Engine
	I0507 19:54:19.641066    5068 command_runner.go:130] > Documentation=https://docs.docker.com
	I0507 19:54:19.641066    5068 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0507 19:54:19.641066    5068 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0507 19:54:19.641066    5068 command_runner.go:130] > StartLimitBurst=3
	I0507 19:54:19.641066    5068 command_runner.go:130] > StartLimitIntervalSec=60
	I0507 19:54:19.641066    5068 command_runner.go:130] > [Service]
	I0507 19:54:19.641066    5068 command_runner.go:130] > Type=notify
	I0507 19:54:19.641066    5068 command_runner.go:130] > Restart=on-failure
	I0507 19:54:19.641066    5068 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0507 19:54:19.641066    5068 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0507 19:54:19.641066    5068 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0507 19:54:19.641066    5068 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0507 19:54:19.641066    5068 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0507 19:54:19.641066    5068 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0507 19:54:19.641066    5068 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0507 19:54:19.641066    5068 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0507 19:54:19.641066    5068 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0507 19:54:19.641066    5068 command_runner.go:130] > ExecStart=
	I0507 19:54:19.641595    5068 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0507 19:54:19.641634    5068 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0507 19:54:19.641673    5068 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0507 19:54:19.641707    5068 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0507 19:54:19.641730    5068 command_runner.go:130] > LimitNOFILE=infinity
	I0507 19:54:19.641767    5068 command_runner.go:130] > LimitNPROC=infinity
	I0507 19:54:19.641767    5068 command_runner.go:130] > LimitCORE=infinity
	I0507 19:54:19.641767    5068 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0507 19:54:19.641809    5068 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0507 19:54:19.641809    5068 command_runner.go:130] > TasksMax=infinity
	I0507 19:54:19.641809    5068 command_runner.go:130] > TimeoutStartSec=0
	I0507 19:54:19.641809    5068 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0507 19:54:19.641863    5068 command_runner.go:130] > Delegate=yes
	I0507 19:54:19.641863    5068 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0507 19:54:19.641906    5068 command_runner.go:130] > KillMode=process
	I0507 19:54:19.641906    5068 command_runner.go:130] > [Install]
	I0507 19:54:19.641906    5068 command_runner.go:130] > WantedBy=multi-user.target
	I0507 19:54:19.652606    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 19:54:19.681205    5068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 19:54:19.714663    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 19:54:19.745792    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 19:54:19.780991    5068 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0507 19:54:19.835684    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 19:54:19.857319    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 19:54:19.888879    5068 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0507 19:54:19.898399    5068 ssh_runner.go:195] Run: which cri-dockerd
	I0507 19:54:19.906798    5068 command_runner.go:130] > /usr/bin/cri-dockerd
	I0507 19:54:19.914923    5068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 19:54:19.931022    5068 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 19:54:19.968592    5068 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 19:54:20.149453    5068 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 19:54:20.307327    5068 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 19:54:20.307694    5068 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0507 19:54:20.349707    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:54:20.512518    5068 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 19:54:23.131801    5068 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6189795s)
	I0507 19:54:23.142900    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 19:54:23.175076    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 19:54:23.206658    5068 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 19:54:23.397345    5068 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 19:54:23.563459    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:54:23.740354    5068 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 19:54:23.774310    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 19:54:23.801583    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:54:23.973102    5068 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 19:54:24.069040    5068 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 19:54:24.078731    5068 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 19:54:24.092760    5068 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0507 19:54:24.092760    5068 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0507 19:54:24.093283    5068 command_runner.go:130] > Device: 0,22	Inode: 850         Links: 1
	I0507 19:54:24.093283    5068 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0507 19:54:24.093283    5068 command_runner.go:130] > Access: 2024-05-07 19:54:24.237329816 +0000
	I0507 19:54:24.093283    5068 command_runner.go:130] > Modify: 2024-05-07 19:54:24.237329816 +0000
	I0507 19:54:24.093283    5068 command_runner.go:130] > Change: 2024-05-07 19:54:24.240329986 +0000
	I0507 19:54:24.093283    5068 command_runner.go:130] >  Birth: -
	I0507 19:54:24.093283    5068 start.go:562] Will wait 60s for crictl version
	I0507 19:54:24.101134    5068 ssh_runner.go:195] Run: which crictl
	I0507 19:54:24.107393    5068 command_runner.go:130] > /usr/bin/crictl
	I0507 19:54:24.118078    5068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 19:54:24.175955    5068 command_runner.go:130] > Version:  0.1.0
	I0507 19:54:24.175955    5068 command_runner.go:130] > RuntimeName:  docker
	I0507 19:54:24.175955    5068 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0507 19:54:24.175955    5068 command_runner.go:130] > RuntimeApiVersion:  v1
	I0507 19:54:24.177615    5068 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0507 19:54:24.185646    5068 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 19:54:24.212996    5068 command_runner.go:130] > 26.0.2
	I0507 19:54:24.225377    5068 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 19:54:24.252292    5068 command_runner.go:130] > 26.0.2
	I0507 19:54:24.256583    5068 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0507 19:54:24.256825    5068 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0507 19:54:24.260588    5068 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0507 19:54:24.260588    5068 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0507 19:54:24.260588    5068 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0507 19:54:24.260588    5068 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a3:a5:4f Flags:up|broadcast|multicast|running}
	I0507 19:54:24.263407    5068 ip.go:210] interface addr: fe80::1edb:f5fd:c218:d8d2/64
	I0507 19:54:24.263407    5068 ip.go:210] interface addr: 172.19.128.1/20
	I0507 19:54:24.271406    5068 ssh_runner.go:195] Run: grep 172.19.128.1	host.minikube.internal$ /etc/hosts
	I0507 19:54:24.276449    5068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 19:54:24.296245    5068 kubeadm.go:877] updating cluster {Name:multinode-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.30.0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.135.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.143.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.129.4 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress
-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0507 19:54:24.298312    5068 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 19:54:24.307371    5068 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 19:54:24.327312    5068 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0507 19:54:24.327312    5068 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0507 19:54:24.327312    5068 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0507 19:54:24.327312    5068 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0507 19:54:24.327312    5068 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0507 19:54:24.327312    5068 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0507 19:54:24.327312    5068 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0507 19:54:24.327312    5068 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0507 19:54:24.327312    5068 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 19:54:24.327312    5068 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0507 19:54:24.327312    5068 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0507 19:54:24.327312    5068 docker.go:615] Images already preloaded, skipping extraction
	I0507 19:54:24.336808    5068 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0507 19:54:24.358252    5068 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.0
	I0507 19:54:24.358252    5068 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.0
	I0507 19:54:24.358252    5068 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.0
	I0507 19:54:24.358252    5068 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.0
	I0507 19:54:24.358252    5068 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0507 19:54:24.358252    5068 command_runner.go:130] > kindest/kindnetd:v20240202-8f1494ea
	I0507 19:54:24.358252    5068 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0507 19:54:24.358252    5068 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0507 19:54:24.358252    5068 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 19:54:24.358252    5068 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0507 19:54:24.359335    5068 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	kindest/kindnetd:v20240202-8f1494ea
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0507 19:54:24.359404    5068 cache_images.go:84] Images are preloaded, skipping loading
	I0507 19:54:24.359472    5068 kubeadm.go:928] updating node { 172.19.135.22 8443 v1.30.0 docker true true} ...
	I0507 19:54:24.359681    5068 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-600000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.135.22
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0507 19:54:24.368871    5068 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0507 19:54:24.398207    5068 command_runner.go:130] > cgroupfs
	I0507 19:54:24.398576    5068 cni.go:84] Creating CNI manager for ""
	I0507 19:54:24.398641    5068 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0507 19:54:24.398704    5068 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0507 19:54:24.398770    5068 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.19.135.22 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-600000 NodeName:multinode-600000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.19.135.22"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.19.135.22 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/
etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0507 19:54:24.399028    5068 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.19.135.22
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-600000"
	  kubeletExtraArgs:
	    node-ip: 172.19.135.22
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.19.135.22"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0507 19:54:24.410945    5068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0507 19:54:24.428369    5068 command_runner.go:130] > kubeadm
	I0507 19:54:24.428369    5068 command_runner.go:130] > kubectl
	I0507 19:54:24.428369    5068 command_runner.go:130] > kubelet
	I0507 19:54:24.428369    5068 binaries.go:44] Found k8s binaries, skipping transfer
	I0507 19:54:24.439591    5068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0507 19:54:24.455040    5068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (316 bytes)
	I0507 19:54:24.481103    5068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 19:54:24.510526    5068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2161 bytes)
	I0507 19:54:24.551767    5068 ssh_runner.go:195] Run: grep 172.19.135.22	control-plane.minikube.internal$ /etc/hosts
	I0507 19:54:24.557768    5068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.135.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 19:54:24.584796    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:54:24.756156    5068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 19:54:24.780182    5068 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000 for IP: 172.19.135.22
	I0507 19:54:24.780182    5068 certs.go:194] generating shared ca certs ...
	I0507 19:54:24.780182    5068 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:54:24.780182    5068 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0507 19:54:24.780182    5068 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0507 19:54:24.780182    5068 certs.go:256] generating profile certs ...
	I0507 19:54:24.781211    5068 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\client.key
	I0507 19:54:24.781211    5068 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.key.dd7893a2
	I0507 19:54:24.781211    5068 crypto.go:68] Generating cert C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.crt.dd7893a2 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.19.135.22]
	I0507 19:54:25.324659    5068 crypto.go:156] Writing cert to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.crt.dd7893a2 ...
	I0507 19:54:25.324659    5068 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.crt.dd7893a2: {Name:mk127e1d8a025ca85e8efb765cb09033477d8260 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:54:25.326757    5068 crypto.go:164] Writing key to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.key.dd7893a2 ...
	I0507 19:54:25.326875    5068 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.key.dd7893a2: {Name:mk6b31bdc424a43bacdc443d1684db2db1535129 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:54:25.328221    5068 certs.go:381] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.crt.dd7893a2 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.crt
	I0507 19:54:25.339466    5068 certs.go:385] copying C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.key.dd7893a2 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.key
	I0507 19:54:25.340462    5068 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\proxy-client.key
	I0507 19:54:25.340462    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0507 19:54:25.340811    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0507 19:54:25.340811    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0507 19:54:25.340953    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0507 19:54:25.341100    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0507 19:54:25.341211    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0507 19:54:25.341360    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0507 19:54:25.342066    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0507 19:54:25.342619    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem (1338 bytes)
	W0507 19:54:25.342801    5068 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992_empty.pem, impossibly tiny 0 bytes
	I0507 19:54:25.342801    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0507 19:54:25.343121    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0507 19:54:25.343327    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0507 19:54:25.343545    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0507 19:54:25.343752    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem (1708 bytes)
	I0507 19:54:25.343752    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:54:25.343752    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem -> /usr/share/ca-certificates/9992.pem
	I0507 19:54:25.343752    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /usr/share/ca-certificates/99922.pem
	I0507 19:54:25.344857    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 19:54:25.386978    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 19:54:25.428739    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 19:54:25.470012    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0507 19:54:25.510724    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0507 19:54:25.557534    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0507 19:54:25.605497    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0507 19:54:25.651354    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0507 19:54:25.699494    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 19:54:25.738493    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem --> /usr/share/ca-certificates/9992.pem (1338 bytes)
	I0507 19:54:25.779740    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /usr/share/ca-certificates/99922.pem (1708 bytes)
	I0507 19:54:25.820503    5068 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0507 19:54:25.859700    5068 ssh_runner.go:195] Run: openssl version
	I0507 19:54:25.867680    5068 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0507 19:54:25.875868    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9992.pem && ln -fs /usr/share/ca-certificates/9992.pem /etc/ssl/certs/9992.pem"
	I0507 19:54:25.901365    5068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9992.pem
	I0507 19:54:25.907336    5068 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 19:54:25.907810    5068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 19:54:25.918089    5068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9992.pem
	I0507 19:54:25.928898    5068 command_runner.go:130] > 51391683
	I0507 19:54:25.938026    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9992.pem /etc/ssl/certs/51391683.0"
	I0507 19:54:25.962800    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99922.pem && ln -fs /usr/share/ca-certificates/99922.pem /etc/ssl/certs/99922.pem"
	I0507 19:54:25.989211    5068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99922.pem
	I0507 19:54:25.994825    5068 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 19:54:25.995099    5068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 19:54:26.004048    5068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99922.pem
	I0507 19:54:26.011724    5068 command_runner.go:130] > 3ec20f2e
	I0507 19:54:26.020504    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99922.pem /etc/ssl/certs/3ec20f2e.0"
	I0507 19:54:26.045830    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 19:54:26.069957    5068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:54:26.077245    5068 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:54:26.077245    5068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:54:26.084957    5068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:54:26.092889    5068 command_runner.go:130] > b5213941
	I0507 19:54:26.100595    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0507 19:54:26.124448    5068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 19:54:26.131882    5068 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 19:54:26.131882    5068 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0507 19:54:26.131882    5068 command_runner.go:130] > Device: 8,1	Inode: 6290249     Links: 1
	I0507 19:54:26.131882    5068 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0507 19:54:26.131882    5068 command_runner.go:130] > Access: 2024-05-07 19:33:33.451504449 +0000
	I0507 19:54:26.131882    5068 command_runner.go:130] > Modify: 2024-05-07 19:33:33.451504449 +0000
	I0507 19:54:26.131882    5068 command_runner.go:130] > Change: 2024-05-07 19:33:33.451504449 +0000
	I0507 19:54:26.131882    5068 command_runner.go:130] >  Birth: 2024-05-07 19:33:33.451504449 +0000
	I0507 19:54:26.140258    5068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0507 19:54:26.148579    5068 command_runner.go:130] > Certificate will not expire
	I0507 19:54:26.156461    5068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0507 19:54:26.164631    5068 command_runner.go:130] > Certificate will not expire
	I0507 19:54:26.172373    5068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0507 19:54:26.181135    5068 command_runner.go:130] > Certificate will not expire
	I0507 19:54:26.187854    5068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0507 19:54:26.197088    5068 command_runner.go:130] > Certificate will not expire
	I0507 19:54:26.205096    5068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0507 19:54:26.214297    5068 command_runner.go:130] > Certificate will not expire
	I0507 19:54:26.223149    5068 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0507 19:54:26.230612    5068 command_runner.go:130] > Certificate will not expire
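Each `-checkend 86400` probe above asks openssl whether the certificate will still be valid 24 hours (86400 seconds) from now; exit status 0 plus the message "Certificate will not expire" means it will. A standalone reproduction with a freshly generated stand-in cert (hypothetical `/tmp` paths, not minikube's real cert locations):

```shell
# Create a cert valid for a year, then ask if it expires within 24 hours.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
  -out /tmp/demo.crt -days 365 -subj "/CN=demo" 2>/dev/null
result=$(openssl x509 -noout -in /tmp/demo.crt -checkend 86400)
echo "$result"
```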
	I0507 19:54:26.231173    5068 kubeadm.go:391] StartCluster: {Name:multinode-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.135.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.143.144 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.129.4 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 19:54:26.237698    5068 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0507 19:54:26.266846    5068 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0507 19:54:26.284565    5068 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0507 19:54:26.284565    5068 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0507 19:54:26.284565    5068 command_runner.go:130] > /var/lib/minikube/etcd:
	I0507 19:54:26.284565    5068 command_runner.go:130] > member
	W0507 19:54:26.284565    5068 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0507 19:54:26.284565    5068 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0507 19:54:26.284565    5068 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0507 19:54:26.293560    5068 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0507 19:54:26.309606    5068 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0507 19:54:26.310680    5068 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-600000" does not appear in C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 19:54:26.311139    5068 kubeconfig.go:62] C:\Users\jenkins.minikube5\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-600000" cluster setting kubeconfig missing "multinode-600000" context setting]
	I0507 19:54:26.311646    5068 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:54:26.326973    5068 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 19:54:26.327887    5068 kapi.go:59] client config for multinode-600000: &rest.Config{Host:"https://172.19.135.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000/client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000/client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 19:54:26.329030    5068 cert_rotation.go:137] Starting client certificate rotation controller
	I0507 19:54:26.338946    5068 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0507 19:54:26.355852    5068 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0507 19:54:26.355852    5068 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0507 19:54:26.355852    5068 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0507 19:54:26.355852    5068 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0507 19:54:26.355852    5068 command_runner.go:130] >  kind: InitConfiguration
	I0507 19:54:26.355852    5068 command_runner.go:130] >  localAPIEndpoint:
	I0507 19:54:26.355852    5068 command_runner.go:130] > -  advertiseAddress: 172.19.143.74
	I0507 19:54:26.355852    5068 command_runner.go:130] > +  advertiseAddress: 172.19.135.22
	I0507 19:54:26.355852    5068 command_runner.go:130] >    bindPort: 8443
	I0507 19:54:26.355852    5068 command_runner.go:130] >  bootstrapTokens:
	I0507 19:54:26.355852    5068 command_runner.go:130] >    - groups:
	I0507 19:54:26.355852    5068 command_runner.go:130] > @@ -14,13 +14,13 @@
	I0507 19:54:26.355852    5068 command_runner.go:130] >    criSocket: unix:///var/run/cri-dockerd.sock
	I0507 19:54:26.355852    5068 command_runner.go:130] >    name: "multinode-600000"
	I0507 19:54:26.355852    5068 command_runner.go:130] >    kubeletExtraArgs:
	I0507 19:54:26.355852    5068 command_runner.go:130] > -    node-ip: 172.19.143.74
	I0507 19:54:26.355852    5068 command_runner.go:130] > +    node-ip: 172.19.135.22
	I0507 19:54:26.355852    5068 command_runner.go:130] >    taints: []
	I0507 19:54:26.355852    5068 command_runner.go:130] >  ---
	I0507 19:54:26.355852    5068 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta3
	I0507 19:54:26.355852    5068 command_runner.go:130] >  kind: ClusterConfiguration
	I0507 19:54:26.355852    5068 command_runner.go:130] >  apiServer:
	I0507 19:54:26.355852    5068 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.19.143.74"]
	I0507 19:54:26.355852    5068 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.19.135.22"]
	I0507 19:54:26.355852    5068 command_runner.go:130] >    extraArgs:
	I0507 19:54:26.355852    5068 command_runner.go:130] >      enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0507 19:54:26.355852    5068 command_runner.go:130] >  controllerManager:
	I0507 19:54:26.355852    5068 kubeadm.go:634] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.19.143.74
	+  advertiseAddress: 172.19.135.22
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -14,13 +14,13 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "multinode-600000"
	   kubeletExtraArgs:
	-    node-ip: 172.19.143.74
	+    node-ip: 172.19.135.22
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta3
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.19.143.74"]
	+  certSANs: ["127.0.0.1", "localhost", "172.19.135.22"]
	   extraArgs:
	     enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	
	-- /stdout --
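The drift check above works because `diff -u` exits non-zero when the deployed `kubeadm.yaml` and the newly generated `kubeadm.yaml.new` differ, which is what triggers the reconfigure path in kubeadm.go. A minimal reproduction of that mechanism with stand-in files containing just the changed field:

```shell
# Old and new configs differ only in the advertise address, as in the log.
printf 'advertiseAddress: 172.19.143.74\n' > /tmp/kubeadm.yaml
printf 'advertiseAddress: 172.19.135.22\n' > /tmp/kubeadm.yaml.new
# diff exits 0 when identical, 1 when the files differ.
if diff -u /tmp/kubeadm.yaml /tmp/kubeadm.yaml.new > /tmp/kubeadm.diff; then
  drift=no
else
  drift=yes
fi
echo "drift=$drift"
```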
	I0507 19:54:26.355852    5068 kubeadm.go:1154] stopping kube-system containers ...
	I0507 19:54:26.363710    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0507 19:54:26.387961    5068 command_runner.go:130] > 9550b237d8d7
	I0507 19:54:26.388677    5068 command_runner.go:130] > 232351adf489
	I0507 19:54:26.388677    5068 command_runner.go:130] > 99af61c6e282
	I0507 19:54:26.388677    5068 command_runner.go:130] > 57950c0fdcbe
	I0507 19:54:26.388677    5068 command_runner.go:130] > 2d49ad078ed3
	I0507 19:54:26.388677    5068 command_runner.go:130] > aa9692c1fbd3
	I0507 19:54:26.388677    5068 command_runner.go:130] > 70cff02905e8
	I0507 19:54:26.388677    5068 command_runner.go:130] > 58ebd877d77f
	I0507 19:54:26.388677    5068 command_runner.go:130] > 1ad9d5948325
	I0507 19:54:26.388677    5068 command_runner.go:130] > 7cefdac2050f
	I0507 19:54:26.388677    5068 command_runner.go:130] > 3067f16e2e38
	I0507 19:54:26.388677    5068 command_runner.go:130] > 675dcdcafeef
	I0507 19:54:26.388677    5068 command_runner.go:130] > af16a92d7c1c
	I0507 19:54:26.388677    5068 command_runner.go:130] > 75f27faec2ed
	I0507 19:54:26.388677    5068 command_runner.go:130] > 86921e764374
	I0507 19:54:26.388677    5068 command_runner.go:130] > ca0d42037347
	I0507 19:54:26.388677    5068 docker.go:483] Stopping containers: [9550b237d8d7 232351adf489 99af61c6e282 57950c0fdcbe 2d49ad078ed3 aa9692c1fbd3 70cff02905e8 58ebd877d77f 1ad9d5948325 7cefdac2050f 3067f16e2e38 675dcdcafeef af16a92d7c1c 75f27faec2ed 86921e764374 ca0d42037347]
	I0507 19:54:26.396445    5068 ssh_runner.go:195] Run: docker stop 9550b237d8d7 232351adf489 99af61c6e282 57950c0fdcbe 2d49ad078ed3 aa9692c1fbd3 70cff02905e8 58ebd877d77f 1ad9d5948325 7cefdac2050f 3067f16e2e38 675dcdcafeef af16a92d7c1c 75f27faec2ed 86921e764374 ca0d42037347
	I0507 19:54:26.417608    5068 command_runner.go:130] > 9550b237d8d7
	I0507 19:54:26.418623    5068 command_runner.go:130] > 232351adf489
	I0507 19:54:26.418623    5068 command_runner.go:130] > 99af61c6e282
	I0507 19:54:26.418623    5068 command_runner.go:130] > 57950c0fdcbe
	I0507 19:54:26.418623    5068 command_runner.go:130] > 2d49ad078ed3
	I0507 19:54:26.418623    5068 command_runner.go:130] > aa9692c1fbd3
	I0507 19:54:26.418623    5068 command_runner.go:130] > 70cff02905e8
	I0507 19:54:26.418623    5068 command_runner.go:130] > 58ebd877d77f
	I0507 19:54:26.418623    5068 command_runner.go:130] > 1ad9d5948325
	I0507 19:54:26.418623    5068 command_runner.go:130] > 7cefdac2050f
	I0507 19:54:26.418623    5068 command_runner.go:130] > 3067f16e2e38
	I0507 19:54:26.418623    5068 command_runner.go:130] > 675dcdcafeef
	I0507 19:54:26.418623    5068 command_runner.go:130] > af16a92d7c1c
	I0507 19:54:26.418623    5068 command_runner.go:130] > 75f27faec2ed
	I0507 19:54:26.418623    5068 command_runner.go:130] > 86921e764374
	I0507 19:54:26.418623    5068 command_runner.go:130] > ca0d42037347
	I0507 19:54:26.428246    5068 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0507 19:54:26.468106    5068 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0507 19:54:26.484168    5068 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0507 19:54:26.484168    5068 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0507 19:54:26.484168    5068 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0507 19:54:26.484168    5068 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0507 19:54:26.484168    5068 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0507 19:54:26.484168    5068 kubeadm.go:156] found existing configuration files:
	
	I0507 19:54:26.492934    5068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0507 19:54:26.507299    5068 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0507 19:54:26.507299    5068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0507 19:54:26.516038    5068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0507 19:54:26.542458    5068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0507 19:54:26.561197    5068 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0507 19:54:26.561197    5068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0507 19:54:26.569286    5068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0507 19:54:26.596730    5068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0507 19:54:26.611731    5068 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0507 19:54:26.611984    5068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0507 19:54:26.622268    5068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0507 19:54:26.644568    5068 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0507 19:54:26.659906    5068 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0507 19:54:26.659984    5068 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0507 19:54:26.670950    5068 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
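The grep/rm pairs above implement stale-config cleanup: any kubeconfig that does not reference the expected control-plane endpoint is deleted so `kubeadm init phase kubeconfig` can regenerate it. A sketch of the same check against a hypothetical stand-in file (the real paths are under `/etc/kubernetes`):

```shell
endpoint="https://control-plane.minikube.internal:8443"
conf=/tmp/demo-kubelet.conf   # stand-in for /etc/kubernetes/kubelet.conf
echo "server: https://old-endpoint:8443" > "$conf"
# Same logic as the logged commands: no endpoint match => remove the file.
if ! grep -q "$endpoint" "$conf"; then
  rm -f "$conf"
fi
```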
	I0507 19:54:26.695589    5068 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0507 19:54:26.710698    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 19:54:26.906864    5068 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0507 19:54:26.906864    5068 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0507 19:54:26.906864    5068 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0507 19:54:26.906864    5068 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0507 19:54:26.906864    5068 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0507 19:54:26.906864    5068 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0507 19:54:26.906864    5068 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0507 19:54:26.906864    5068 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0507 19:54:26.906864    5068 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0507 19:54:26.906864    5068 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0507 19:54:26.906864    5068 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0507 19:54:26.906864    5068 command_runner.go:130] > [certs] Using the existing "sa" key
	I0507 19:54:26.906864    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 19:54:28.084580    5068 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0507 19:54:28.084685    5068 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0507 19:54:28.084685    5068 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0507 19:54:28.084685    5068 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0507 19:54:28.084685    5068 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0507 19:54:28.084685    5068 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0507 19:54:28.084769    5068 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.1778295s)
	I0507 19:54:28.084853    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0507 19:54:28.356216    5068 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0507 19:54:28.356216    5068 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0507 19:54:28.356216    5068 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0507 19:54:28.356216    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 19:54:28.455514    5068 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0507 19:54:28.455616    5068 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0507 19:54:28.455616    5068 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0507 19:54:28.455616    5068 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0507 19:54:28.455724    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0507 19:54:28.556018    5068 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0507 19:54:28.556186    5068 api_server.go:52] waiting for apiserver process to appear ...
	I0507 19:54:28.567721    5068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 19:54:29.075614    5068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 19:54:29.574385    5068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 19:54:30.080571    5068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 19:54:30.124642    5068 command_runner.go:130] > 1882
	I0507 19:54:30.125926    5068 api_server.go:72] duration metric: took 1.5696894s to wait for apiserver process to appear ...
	I0507 19:54:30.125988    5068 api_server.go:88] waiting for apiserver healthz status ...
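The "waiting for apiserver process" phase above simply re-runs `pgrep -xnf kube-apiserver.*minikube.*` on an interval until a PID comes back, then records the elapsed time as a duration metric. A self-contained stand-in of that polling loop, using a background `sleep` in place of kube-apiserver:

```shell
# Start a stand-in long-running process, then poll for it with pgrep.
sleep 7 & target=$!
found=no
for i in 1 2 3 4 5; do
  if pgrep -f 'sleep 7' > /dev/null 2>&1; then
    found=yes
    break
  fi
  sleep 0.2
done
kill "$target" 2>/dev/null
echo "found=$found"
```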
	I0507 19:54:30.126069    5068 api_server.go:253] Checking apiserver healthz at https://172.19.135.22:8443/healthz ...
	I0507 19:54:33.390965    5068 api_server.go:279] https://172.19.135.22:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0507 19:54:33.390965    5068 api_server.go:103] status: https://172.19.135.22:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0507 19:54:33.390965    5068 api_server.go:253] Checking apiserver healthz at https://172.19.135.22:8443/healthz ...
	I0507 19:54:33.487487    5068 api_server.go:279] https://172.19.135.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0507 19:54:33.487643    5068 api_server.go:103] status: https://172.19.135.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[-]poststarthook/apiservice-registration-controller failed: reason withheld
	[+]poststarthook/apiservice-status-available-controller ok
	[-]poststarthook/apiservice-discovery-controller failed: reason withheld
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
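In the `/healthz` bodies above, `[+]` lines are checks that passed and `[-]` lines are poststarthooks that have not finished yet; the endpoint keeps returning 500 until no `[-]` lines remain, and the loop retries until then. A sketch of counting the failed checks in one of these response bodies, using a small sample saved to a stand-in file:

```shell
# Sample of a not-yet-ready healthz body, as in the log above.
cat > /tmp/healthz.txt <<'EOF'
[+]ping ok
[+]etcd ok
[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
[-]poststarthook/bootstrap-controller failed: reason withheld
healthz check failed
EOF
# Count lines that report a failed check.
failed=$(grep -c '^\[-\]' /tmp/healthz.txt)
echo "failed checks: $failed"
```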
	I0507 19:54:33.630279    5068 api_server.go:253] Checking apiserver healthz at https://172.19.135.22:8443/healthz ...
	I0507 19:54:33.663143    5068 api_server.go:279] https://172.19.135.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0507 19:54:33.663143    5068 api_server.go:103] status: https://172.19.135.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0507 19:54:34.136047    5068 api_server.go:253] Checking apiserver healthz at https://172.19.135.22:8443/healthz ...
	I0507 19:54:34.142905    5068 api_server.go:279] https://172.19.135.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0507 19:54:34.142962    5068 api_server.go:103] status: https://172.19.135.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0507 19:54:34.639619    5068 api_server.go:253] Checking apiserver healthz at https://172.19.135.22:8443/healthz ...
	I0507 19:54:34.656622    5068 api_server.go:279] https://172.19.135.22:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0507 19:54:34.657675    5068 api_server.go:103] status: https://172.19.135.22:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0507 19:54:35.132173    5068 api_server.go:253] Checking apiserver healthz at https://172.19.135.22:8443/healthz ...
	I0507 19:54:35.143656    5068 api_server.go:279] https://172.19.135.22:8443/healthz returned 200:
	ok
	I0507 19:54:35.143905    5068 round_trippers.go:463] GET https://172.19.135.22:8443/version
	I0507 19:54:35.143965    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:35.143965    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:35.143965    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:35.158446    5068 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0507 19:54:35.158626    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:35.158626    5068 round_trippers.go:580]     Content-Length: 263
	I0507 19:54:35.158626    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:35 GMT
	I0507 19:54:35.158626    5068 round_trippers.go:580]     Audit-Id: caa36cae-0d24-4fd4-bdea-225b9a482822
	I0507 19:54:35.158626    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:35.158626    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:35.158626    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:35.158626    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:35.158626    5068 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0507 19:54:35.158626    5068 api_server.go:141] control plane version: v1.30.0
	I0507 19:54:35.158626    5068 api_server.go:131] duration metric: took 5.0323125s to wait for apiserver health ...
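The log above shows minikube polling `https://<apiserver>:8443/healthz` roughly every half second, logging each 500 ("healthz check failed" while post-start hooks like `rbac/bootstrap-roles` are still pending) until the endpoint finally returns 200 "ok". A minimal sketch of that retry loop, with a hypothetical `check` callable standing in for the real HTTPS GET (interval and timeout here are illustrative, not minikube's actual values):

```python
import time

def wait_for_healthz(check, interval=0.5, timeout=300):
    """Poll `check()` until it returns (200, body) or the timeout expires.

    `check` is a callable returning (status_code, body), standing in for
    an HTTPS GET against the apiserver's /healthz endpoint.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status, body = check()
        if status == 200:
            return body  # apiserver reported "ok"
        # A 500 whose body lists "[-]poststarthook/... failed" lines means
        # the apiserver is up but post-start hooks have not finished yet.
        time.sleep(interval)
    raise TimeoutError("apiserver /healthz never returned 200")

# Simulate the sequence seen in the log: two 500s, then 200 "ok".
responses = iter([(500, "healthz check failed"),
                  (500, "healthz check failed"),
                  (200, "ok")])
print(wait_for_healthz(lambda: next(responses), interval=0))  # -> ok
```

The per-hook `[+]`/`[-]` lines in the 500 body are what let the log distinguish "apiserver down" from "apiserver up but still bootstrapping RBAC roles and priority classes".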
	I0507 19:54:35.158626    5068 cni.go:84] Creating CNI manager for ""
	I0507 19:54:35.158626    5068 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0507 19:54:35.161350    5068 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0507 19:54:35.175022    5068 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0507 19:54:35.184381    5068 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0507 19:54:35.184455    5068 command_runner.go:130] >   Size: 2694104   	Blocks: 5264       IO Block: 4096   regular file
	I0507 19:54:35.184455    5068 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0507 19:54:35.184455    5068 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0507 19:54:35.184455    5068 command_runner.go:130] > Access: 2024-05-07 19:53:12.464646300 +0000
	I0507 19:54:35.184455    5068 command_runner.go:130] > Modify: 2024-04-30 23:29:30.000000000 +0000
	I0507 19:54:35.184455    5068 command_runner.go:130] > Change: 2024-05-07 19:53:02.787000000 +0000
	I0507 19:54:35.184455    5068 command_runner.go:130] >  Birth: -
	I0507 19:54:35.184519    5068 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0507 19:54:35.184578    5068 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0507 19:54:35.234857    5068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0507 19:54:36.031018    5068 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0507 19:54:36.031018    5068 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0507 19:54:36.031018    5068 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0507 19:54:36.031018    5068 command_runner.go:130] > daemonset.apps/kindnet configured
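The `cni.go:136` line above chose kindnet because three nodes were found. A toy version of that selection rule, inferred only from this log line and not from minikube's full `cni.go` logic (the single-node fallback name here is purely illustrative):

```python
def recommend_cni(node_count, user_choice=""):
    """Pick a CNI plugin name; mirrors the log's multinode heuristic."""
    if user_choice:
        return user_choice  # an explicit --cni flag would win
    # Multinode clusters need pod-to-pod routing across hosts, so a
    # multi-host CNI (kindnet) is recommended when more than one node exists.
    return "kindnet" if node_count > 1 else "default"

print(recommend_cni(3))  # -> kindnet, matching "multinode detected (3 nodes found)"
```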
	I0507 19:54:36.031202    5068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0507 19:54:36.031444    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods
	I0507 19:54:36.031499    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:36.031557    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:36.031557    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:36.040436    5068 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 19:54:36.040436    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:36.040436    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:36.040580    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:36.040580    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:36 GMT
	I0507 19:54:36.040580    5068 round_trippers.go:580]     Audit-Id: e940cb2c-5304-4723-b88b-345a825cf6e5
	I0507 19:54:36.040705    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:36.040705    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:36.042431    5068 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1759"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87127 chars]
	I0507 19:54:36.048405    5068 system_pods.go:59] 12 kube-system pods found
	I0507 19:54:36.048405    5068 system_pods.go:61] "coredns-7db6d8ff4d-5j966" [d067d438-f4af-42e8-930d-3423a3ac211f] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 19:54:36.048405    5068 system_pods.go:61] "etcd-multinode-600000" [de6e93ee-7fd0-45cd-82eb-44edd4a2c2e3] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0507 19:54:36.048405    5068 system_pods.go:61] "kindnet-dkxzt" [aa15b7bd-3721-4ba9-91f8-8f4f800a31b0] Running
	I0507 19:54:36.048405    5068 system_pods.go:61] "kindnet-jmlw2" [cfa3d04f-9b15-4394-9404-f3ae09e9a125] Running
	I0507 19:54:36.048405    5068 system_pods.go:61] "kindnet-zw4r9" [b5145a4d-38aa-426e-947f-3480e269470e] Running
	I0507 19:54:36.048405    5068 system_pods.go:61] "kube-apiserver-multinode-600000" [4d9ace3f-e061-42ab-bb1d-3dac545f96a9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0507 19:54:36.048405    5068 system_pods.go:61] "kube-controller-manager-multinode-600000" [b960b526-da40-480d-9a72-9ab8c7f2989a] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0507 19:54:36.048405    5068 system_pods.go:61] "kube-proxy-9fb6t" [f91cc93c-cb87-4494-9e11-b3bf74b9311d] Running
	I0507 19:54:36.048405    5068 system_pods.go:61] "kube-proxy-c9gw5" [9a39807c-6243-4aa2-86f4-8626031c80a6] Running
	I0507 19:54:36.048405    5068 system_pods.go:61] "kube-proxy-pzn8q" [f2506861-1f09-4193-b751-22a685a0b71b] Running
	I0507 19:54:36.048405    5068 system_pods.go:61] "kube-scheduler-multinode-600000" [ec3ac949-cb83-49be-a908-c93e23135ae8] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0507 19:54:36.048405    5068 system_pods.go:61] "storage-provisioner" [90142b77-53fb-42e1-94f8-7f8a3c7765ac] Running
	I0507 19:54:36.048405    5068 system_pods.go:74] duration metric: took 17.2018ms to wait for pod list to return data ...
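The `system_pods.go:61` lines above render each pod as its phase plus any unready containers ("Running / Ready:ContainersNotReady (containers with unready status: [coredns])"). A rough sketch of producing such a line from a pod object — field names follow the Kubernetes v1 Pod schema, but the summarizer itself is hypothetical and only approximates minikube's formatting:

```python
def summarize_pod(pod):
    """Render a one-line status similar to the system_pods.go log entries."""
    status = pod["status"]
    line = status.get("phase", "Unknown")
    for cond in status.get("conditions", []):
        # Ready / ContainersReady conditions that are not "True" get
        # annotated with the names of the containers that are not ready.
        if cond["type"] in ("Ready", "ContainersReady") and cond["status"] != "True":
            unready = [cs["name"] for cs in status.get("containerStatuses", [])
                       if not cs.get("ready")]
            line += " / {}:{} (containers with unready status: {})".format(
                cond["type"], cond.get("reason", ""), unready)
    return line

pod = {"status": {"phase": "Running",
                  "conditions": [{"type": "Ready", "status": "False",
                                  "reason": "ContainersNotReady"}],
                  "containerStatuses": [{"name": "coredns", "ready": False}]}}
print(summarize_pod(pod))
```

This is why the control-plane pods above show as Running yet unready: kubelet restarted them moments earlier, so their container statuses lag behind the pod phase.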
	I0507 19:54:36.048405    5068 node_conditions.go:102] verifying NodePressure condition ...
	I0507 19:54:36.048405    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes
	I0507 19:54:36.048405    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:36.048405    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:36.049367    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:36.052728    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:36.052785    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:36.052785    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:36.052785    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:36.052785    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:36.052785    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:36 GMT
	I0507 19:54:36.052785    5068 round_trippers.go:580]     Audit-Id: 4c6a0656-a7c7-4324-8008-2a906dc5aad7
	I0507 19:54:36.052785    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:36.053306    5068 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1759"},"items":[{"metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15628 chars]
	I0507 19:54:36.054282    5068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 19:54:36.054377    5068 node_conditions.go:123] node cpu capacity is 2
	I0507 19:54:36.054377    5068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 19:54:36.054377    5068 node_conditions.go:123] node cpu capacity is 2
	I0507 19:54:36.054377    5068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 19:54:36.054377    5068 node_conditions.go:123] node cpu capacity is 2
	I0507 19:54:36.054377    5068 node_conditions.go:105] duration metric: took 5.971ms to run NodePressure ...
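The `node_conditions.go` lines above read each node's `.status.capacity` and report ephemeral storage (17734596Ki) and cpu (2) for all three nodes. A sketch of that capacity read with a minimal quantity parser — it handles only the plain-integer and `Ki` forms seen in this log (real Kubernetes quantities have more suffixes), and the minimum thresholds are illustrative, not minikube's actual requirements:

```python
def parse_quantity(q):
    """Parse a small subset of Kubernetes quantities: plain ints and Ki."""
    if q.endswith("Ki"):
        return int(q[:-2]) * 1024
    return int(q)

def node_has_capacity(node, min_storage_bytes=10 << 30, min_cpu=1):
    """Check a node's reported ephemeral-storage and cpu capacity."""
    cap = node["status"]["capacity"]
    return (parse_quantity(cap["ephemeral-storage"]) >= min_storage_bytes
            and parse_quantity(cap["cpu"]) >= min_cpu)

# Values taken from the log: 17734596Ki ephemeral storage, 2 cpus.
node = {"status": {"capacity": {"ephemeral-storage": "17734596Ki", "cpu": "2"}}}
print(node_has_capacity(node))  # -> True
```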
	I0507 19:54:36.054377    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0507 19:54:36.257370    5068 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0507 19:54:36.349282    5068 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0507 19:54:36.351162    5068 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0507 19:54:36.351441    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0507 19:54:36.351441    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:36.351441    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:36.351441    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:36.355730    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:54:36.355730    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:36.355818    5068 round_trippers.go:580]     Audit-Id: 462e3640-b9e5-4def-bf42-0c99e5c9dec7
	I0507 19:54:36.355818    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:36.355818    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:36.355818    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:36.355818    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:36.355818    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:36 GMT
	I0507 19:54:36.356715    5068 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1761"},"items":[{"metadata":{"name":"etcd-multinode-600000","namespace":"kube-system","uid":"de6e93ee-7fd0-45cd-82eb-44edd4a2c2e3","resourceVersion":"1737","creationTimestamp":"2024-05-07T19:54:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.135.22:2379","kubernetes.io/config.hash":"1581bf6b00d338797c8fb8b10b74abde","kubernetes.io/config.mirror":"1581bf6b00d338797c8fb8b10b74abde","kubernetes.io/config.seen":"2024-05-07T19:54:28.831640546Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotatio
ns":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f: [truncated 30532 chars]
	I0507 19:54:36.358836    5068 kubeadm.go:733] kubelet initialised
	I0507 19:54:36.358836    5068 kubeadm.go:734] duration metric: took 7.5533ms waiting for restarted kubelet to initialise ...
	I0507 19:54:36.358836    5068 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 19:54:36.358836    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods
	I0507 19:54:36.358836    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:36.358836    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:36.358836    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:36.363639    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:54:36.363639    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:36.363639    5068 round_trippers.go:580]     Audit-Id: b6931e12-d578-482f-80b4-c6b45e5e1177
	I0507 19:54:36.363639    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:36.363639    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:36.363639    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:36.363639    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:36.363639    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:36 GMT
	I0507 19:54:36.364540    5068 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1761"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87127 chars]
	I0507 19:54:36.367720    5068 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace to be "Ready" ...
	I0507 19:54:36.368251    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:54:36.368251    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:36.368251    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:36.368251    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:36.374358    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:54:36.374358    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:36.374358    5068 round_trippers.go:580]     Audit-Id: 3f362fb0-dee5-4251-b78f-87768d9a944b
	I0507 19:54:36.374358    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:36.374358    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:36.374358    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:36.374358    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:36.374358    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:36 GMT
	I0507 19:54:36.374358    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:54:36.375489    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:36.375489    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:36.375489    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:36.375489    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:36.377710    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:54:36.377710    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:36.377710    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:36.377710    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:36 GMT
	I0507 19:54:36.377710    5068 round_trippers.go:580]     Audit-Id: bb1969e8-f265-455f-9914-905ae791d23c
	I0507 19:54:36.377710    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:36.377710    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:36.377710    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:36.378744    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:36.379218    5068 pod_ready.go:97] node "multinode-600000" hosting pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000" has status "Ready":"False"
	I0507 19:54:36.379218    5068 pod_ready.go:81] duration metric: took 11.4975ms for pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace to be "Ready" ...
	E0507 19:54:36.379218    5068 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-600000" hosting pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000" has status "Ready":"False"
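The `pod_ready.go` lines above short-circuit: rather than spending up to 4m0s waiting for coredns, the check fetches the hosting node, sees its `Ready` condition is `"False"`, and skips the pod after 11ms. A sketch of that guard, with hypothetical helper names (the real logic lives in minikube's `pod_ready.go`):

```python
def node_ready(node):
    """True if the node's Ready condition has status "True"."""
    for cond in node["status"].get("conditions", []):
        if cond["type"] == "Ready":
            return cond["status"] == "True"
    return False

def wait_for_pod_ready(pod_name, node):
    # If the hosting node is not Ready, kubelet cannot report the pod
    # Ready either, so waiting out the full timeout would be wasted time.
    if not node_ready(node):
        raise RuntimeError(
            f'node hosting pod "{pod_name}" is currently not "Ready" (skipping!)')
    return "Ready"

node = {"status": {"conditions": [{"type": "Ready", "status": "False"}]}}
try:
    wait_for_pod_ready("coredns-7db6d8ff4d-5j966", node)
except RuntimeError as e:
    print(e)
```

The same skip then repeats below for `etcd-multinode-600000` and the other control-plane pods, since they all live on the not-yet-Ready node.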
	I0507 19:54:36.379312    5068 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:54:36.379312    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-600000
	I0507 19:54:36.379312    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:36.379416    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:36.379416    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:36.381506    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:54:36.381506    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:36.381506    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:36.381506    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:36.381506    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:36.381506    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:36 GMT
	I0507 19:54:36.381506    5068 round_trippers.go:580]     Audit-Id: 2aa10061-ac68-4de2-9172-2a0a40e6b9df
	I0507 19:54:36.381506    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:36.382376    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-600000","namespace":"kube-system","uid":"de6e93ee-7fd0-45cd-82eb-44edd4a2c2e3","resourceVersion":"1737","creationTimestamp":"2024-05-07T19:54:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.135.22:2379","kubernetes.io/config.hash":"1581bf6b00d338797c8fb8b10b74abde","kubernetes.io/config.mirror":"1581bf6b00d338797c8fb8b10b74abde","kubernetes.io/config.seen":"2024-05-07T19:54:28.831640546Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6384 chars]
	I0507 19:54:36.382376    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:36.382376    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:36.382376    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:36.382376    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:36.384984    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:54:36.384984    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:36.384984    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:36.384984    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:36.384984    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:36 GMT
	I0507 19:54:36.384984    5068 round_trippers.go:580]     Audit-Id: b8f639b5-d611-43ac-a5dc-e5e3d205b29a
	I0507 19:54:36.384984    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:36.384984    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:36.384984    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:36.385991    5068 pod_ready.go:97] node "multinode-600000" hosting pod "etcd-multinode-600000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000" has status "Ready":"False"
	I0507 19:54:36.385991    5068 pod_ready.go:81] duration metric: took 6.6791ms for pod "etcd-multinode-600000" in "kube-system" namespace to be "Ready" ...
	E0507 19:54:36.385991    5068 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-600000" hosting pod "etcd-multinode-600000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000" has status "Ready":"False"
	I0507 19:54:36.385991    5068 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:54:36.385991    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600000
	I0507 19:54:36.385991    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:36.385991    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:36.385991    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:36.388800    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:54:36.388800    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:36.388800    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:36.388800    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:36.388800    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:36.388800    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:36 GMT
	I0507 19:54:36.388800    5068 round_trippers.go:580]     Audit-Id: 588fc4ec-0089-4b8c-bfcb-13116007cb30
	I0507 19:54:36.388800    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:36.389113    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600000","namespace":"kube-system","uid":"4d9ace3f-e061-42ab-bb1d-3dac545f96a9","resourceVersion":"1739","creationTimestamp":"2024-05-07T19:54:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.135.22:8443","kubernetes.io/config.hash":"cd9cba8f94818776ec6d8836322192b3","kubernetes.io/config.mirror":"cd9cba8f94818776ec6d8836322192b3","kubernetes.io/config.seen":"2024-05-07T19:54:28.735132188Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:54:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7939 chars]
	I0507 19:54:36.389589    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:36.389589    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:36.389657    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:36.389657    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:36.392160    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:54:36.392160    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:36.392160    5068 round_trippers.go:580]     Audit-Id: 2fe3a4cb-8cce-4d50-a82b-119b8ad35a7c
	I0507 19:54:36.392160    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:36.392160    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:36.392160    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:36.392160    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:36.392160    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:36 GMT
	I0507 19:54:36.392160    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:36.392831    5068 pod_ready.go:97] node "multinode-600000" hosting pod "kube-apiserver-multinode-600000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000" has status "Ready":"False"
	I0507 19:54:36.392831    5068 pod_ready.go:81] duration metric: took 6.8397ms for pod "kube-apiserver-multinode-600000" in "kube-system" namespace to be "Ready" ...
	E0507 19:54:36.392831    5068 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-600000" hosting pod "kube-apiserver-multinode-600000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000" has status "Ready":"False"
	I0507 19:54:36.392893    5068 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:54:36.392964    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-600000
	I0507 19:54:36.392964    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:36.392964    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:36.393025    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:36.394719    5068 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0507 19:54:36.394719    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:36.394719    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:36.394719    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:36.394719    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:36 GMT
	I0507 19:54:36.394719    5068 round_trippers.go:580]     Audit-Id: bba9f692-65f9-4501-b7e5-91955b9bf10c
	I0507 19:54:36.394719    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:36.394719    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:36.395721    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-600000","namespace":"kube-system","uid":"b960b526-da40-480d-9a72-9ab8c7f2989a","resourceVersion":"1680","creationTimestamp":"2024-05-07T19:33:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f5d6aa60dc93b5e562f37ed2236c3022","kubernetes.io/config.mirror":"f5d6aa60dc93b5e562f37ed2236c3022","kubernetes.io/config.seen":"2024-05-07T19:33:37.010155750Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7732 chars]
	I0507 19:54:36.432822    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:36.433233    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:36.433233    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:36.433233    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:36.435572    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:54:36.435572    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:36.435572    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:36.435572    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:36.435572    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:36.435572    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:36.435572    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:36 GMT
	I0507 19:54:36.435572    5068 round_trippers.go:580]     Audit-Id: c8b634a9-317a-4e70-8447-2f2eced0dd13
	I0507 19:54:36.436659    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:36.437087    5068 pod_ready.go:97] node "multinode-600000" hosting pod "kube-controller-manager-multinode-600000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000" has status "Ready":"False"
	I0507 19:54:36.437087    5068 pod_ready.go:81] duration metric: took 44.1912ms for pod "kube-controller-manager-multinode-600000" in "kube-system" namespace to be "Ready" ...
	E0507 19:54:36.437087    5068 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-600000" hosting pod "kube-controller-manager-multinode-600000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000" has status "Ready":"False"
	I0507 19:54:36.437087    5068 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-9fb6t" in "kube-system" namespace to be "Ready" ...
	I0507 19:54:36.638099    5068 request.go:629] Waited for 200.845ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9fb6t
	I0507 19:54:36.638381    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9fb6t
	I0507 19:54:36.638381    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:36.638381    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:36.638381    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:36.642036    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:36.642219    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:36.642219    5068 round_trippers.go:580]     Audit-Id: ae64bc50-195d-442f-86be-0203253d15dd
	I0507 19:54:36.642219    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:36.642219    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:36.642219    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:36.642219    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:36.642219    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:36 GMT
	I0507 19:54:36.642460    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9fb6t","generateName":"kube-proxy-","namespace":"kube-system","uid":"f91cc93c-cb87-4494-9e11-b3bf74b9311d","resourceVersion":"631","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"952e0024-0710-460c-920c-3959ceadbd10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"952e0024-0710-460c-920c-3959ceadbd10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5841 chars]
	I0507 19:54:36.840847    5068 request.go:629] Waited for 197.4297ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:54:36.841079    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:54:36.841146    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:36.841146    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:36.841146    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:36.844792    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:36.844792    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:36.844792    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:36.844792    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:36.844792    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:37 GMT
	I0507 19:54:36.844792    5068 round_trippers.go:580]     Audit-Id: f7a40aed-a0d0-44b4-bc8c-3da939ea1ef8
	I0507 19:54:36.844792    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:36.844792    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:36.844792    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"1366","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3827 chars]
	I0507 19:54:36.845889    5068 pod_ready.go:92] pod "kube-proxy-9fb6t" in "kube-system" namespace has status "Ready":"True"
	I0507 19:54:36.845960    5068 pod_ready.go:81] duration metric: took 408.8463ms for pod "kube-proxy-9fb6t" in "kube-system" namespace to be "Ready" ...
	I0507 19:54:36.845960    5068 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-c9gw5" in "kube-system" namespace to be "Ready" ...
	I0507 19:54:37.042107    5068 request.go:629] Waited for 195.6477ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9gw5
	I0507 19:54:37.042107    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9gw5
	I0507 19:54:37.042378    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:37.042427    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:37.042484    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:37.046900    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:54:37.047013    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:37.047013    5068 round_trippers.go:580]     Audit-Id: 65583d76-5b19-431d-b54c-510d222764df
	I0507 19:54:37.047013    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:37.047069    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:37.047069    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:37.047069    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:37.047069    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:37 GMT
	I0507 19:54:37.047528    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c9gw5","generateName":"kube-proxy-","namespace":"kube-system","uid":"9a39807c-6243-4aa2-86f4-8626031c80a6","resourceVersion":"1759","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"952e0024-0710-460c-920c-3959ceadbd10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"952e0024-0710-460c-920c-3959ceadbd10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0507 19:54:37.232160    5068 request.go:629] Waited for 183.52ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:37.232311    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:37.232311    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:37.232381    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:37.232405    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:37.237858    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:54:37.238507    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:37.238507    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:37.238507    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:37.238507    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:37.238507    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:37.238507    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:37 GMT
	I0507 19:54:37.238507    5068 round_trippers.go:580]     Audit-Id: 99162533-b783-47d2-a289-9dda7e9d8354
	I0507 19:54:37.238960    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:37.239765    5068 pod_ready.go:97] node "multinode-600000" hosting pod "kube-proxy-c9gw5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000" has status "Ready":"False"
	I0507 19:54:37.239880    5068 pod_ready.go:81] duration metric: took 393.8944ms for pod "kube-proxy-c9gw5" in "kube-system" namespace to be "Ready" ...
	E0507 19:54:37.239930    5068 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-600000" hosting pod "kube-proxy-c9gw5" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000" has status "Ready":"False"
	I0507 19:54:37.239966    5068 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-pzn8q" in "kube-system" namespace to be "Ready" ...
	I0507 19:54:37.434275    5068 request.go:629] Waited for 193.9429ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pzn8q
	I0507 19:54:37.434392    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pzn8q
	I0507 19:54:37.434392    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:37.434392    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:37.434392    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:37.436813    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:54:37.436813    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:37.436813    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:37.436813    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:37.436813    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:37 GMT
	I0507 19:54:37.436813    5068 round_trippers.go:580]     Audit-Id: a0081b01-cc00-4bc4-8cdd-2144a9e5810a
	I0507 19:54:37.436813    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:37.436813    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:37.437763    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pzn8q","generateName":"kube-proxy-","namespace":"kube-system","uid":"f2506861-1f09-4193-b751-22a685a0b71b","resourceVersion":"1643","creationTimestamp":"2024-05-07T19:40:53Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"952e0024-0710-460c-920c-3959ceadbd10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:40:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"952e0024-0710-460c-920c-3959ceadbd10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0507 19:54:37.637777    5068 request.go:629] Waited for 199.4066ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 19:54:37.637977    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 19:54:37.637977    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:37.637977    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:37.637977    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:37.641220    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:37.641220    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:37.641220    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:37.641220    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:37.641220    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:37 GMT
	I0507 19:54:37.641220    5068 round_trippers.go:580]     Audit-Id: ae75a5e6-374f-474a-b334-c1f7505887be
	I0507 19:54:37.641220    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:37.641220    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:37.641220    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m03","uid":"ec7533ad-814b-49fe-bc8d-a070f7fb171f","resourceVersion":"1653","creationTimestamp":"2024-05-07T19:50:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_50_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4398 chars]
	I0507 19:54:37.642199    5068 pod_ready.go:97] node "multinode-600000-m03" hosting pod "kube-proxy-pzn8q" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000-m03" has status "Ready":"Unknown"
	I0507 19:54:37.642199    5068 pod_ready.go:81] duration metric: took 402.2071ms for pod "kube-proxy-pzn8q" in "kube-system" namespace to be "Ready" ...
	E0507 19:54:37.642269    5068 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-600000-m03" hosting pod "kube-proxy-pzn8q" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000-m03" has status "Ready":"Unknown"
	I0507 19:54:37.642269    5068 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:54:37.842611    5068 request.go:629] Waited for 200.0851ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600000
	I0507 19:54:37.842795    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600000
	I0507 19:54:37.842795    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:37.842795    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:37.842795    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:37.845465    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:54:37.845465    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:37.845465    5068 round_trippers.go:580]     Audit-Id: d0beb5cd-5db8-448e-8fab-9da9fd995cf3
	I0507 19:54:37.845465    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:37.845465    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:37.845465    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:37.845465    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:37.845465    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:38 GMT
	I0507 19:54:37.846575    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-600000","namespace":"kube-system","uid":"ec3ac949-cb83-49be-a908-c93e23135ae8","resourceVersion":"1732","creationTimestamp":"2024-05-07T19:33:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c4ee79f6d4f6adb00b636f817445fef","kubernetes.io/config.mirror":"7c4ee79f6d4f6adb00b636f817445fef","kubernetes.io/config.seen":"2024-05-07T19:33:44.165677427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5444 chars]
	I0507 19:54:38.032135    5068 request.go:629] Waited for 184.6019ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:38.032443    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:38.032443    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:38.032443    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:38.032443    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:38.035149    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:54:38.035876    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:38.035876    5068 round_trippers.go:580]     Audit-Id: 387f21b9-f5a3-4e18-a786-fe0e75b1d53d
	I0507 19:54:38.035876    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:38.035876    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:38.035876    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:38.035876    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:38.036010    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:38 GMT
	I0507 19:54:38.036180    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:38.036262    5068 pod_ready.go:97] node "multinode-600000" hosting pod "kube-scheduler-multinode-600000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000" has status "Ready":"False"
	I0507 19:54:38.036262    5068 pod_ready.go:81] duration metric: took 393.9673ms for pod "kube-scheduler-multinode-600000" in "kube-system" namespace to be "Ready" ...
	E0507 19:54:38.036262    5068 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-600000" hosting pod "kube-scheduler-multinode-600000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000" has status "Ready":"False"
	I0507 19:54:38.036262    5068 pod_ready.go:38] duration metric: took 1.6773182s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 19:54:38.036262    5068 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0507 19:54:38.058776    5068 command_runner.go:130] > -16
	I0507 19:54:38.058832    5068 ops.go:34] apiserver oom_adj: -16
	I0507 19:54:38.058832    5068 kubeadm.go:591] duration metric: took 11.7735063s to restartPrimaryControlPlane
	I0507 19:54:38.058890    5068 kubeadm.go:393] duration metric: took 11.8269527s to StartCluster
	I0507 19:54:38.058890    5068 settings.go:142] acquiring lock: {Name:mk66ab2e0bae08b477c4ed9caa26e688e6ce3248 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:54:38.059054    5068 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 19:54:38.060382    5068 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\kubeconfig: {Name:mkb19224ea40e2aed3ce8c31a956f5aee129caa9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:54:38.061933    5068 start.go:234] Will wait 6m0s for node &{Name: IP:172.19.135.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0507 19:54:38.061933    5068 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0507 19:54:38.067180    5068 out.go:177] * Verifying Kubernetes components...
	I0507 19:54:38.069609    5068 out.go:177] * Enabled addons: 
	I0507 19:54:38.062317    5068 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:54:38.073286    5068 addons.go:505] duration metric: took 11.4202ms for enable addons: enabled=[]
	I0507 19:54:38.082739    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:54:38.311615    5068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 19:54:38.334228    5068 node_ready.go:35] waiting up to 6m0s for node "multinode-600000" to be "Ready" ...
	I0507 19:54:38.335056    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:38.335056    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:38.335056    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:38.335056    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:38.338273    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:38.338273    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:38.338273    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:38.338273    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:38.338273    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:38.338273    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:38 GMT
	I0507 19:54:38.338273    5068 round_trippers.go:580]     Audit-Id: 30d6118d-a6e3-4d3f-8023-52821f755c24
	I0507 19:54:38.338273    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:38.339015    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:38.834649    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:38.834649    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:38.834649    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:38.835089    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:38.838425    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:38.838425    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:38.838425    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:38.838425    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:38.838425    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:39 GMT
	I0507 19:54:38.838425    5068 round_trippers.go:580]     Audit-Id: 53d25f47-a658-421c-b84f-6c04c1f0a74f
	I0507 19:54:38.838425    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:38.838425    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:38.839334    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:39.337568    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:39.337648    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:39.337648    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:39.337648    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:39.340969    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:39.341373    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:39.341436    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:39 GMT
	I0507 19:54:39.341436    5068 round_trippers.go:580]     Audit-Id: b01dfe76-bed4-4042-8d64-b840710860db
	I0507 19:54:39.341436    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:39.341436    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:39.341436    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:39.341436    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:39.341714    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:39.837270    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:39.837270    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:39.837270    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:39.837270    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:39.841001    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:39.841001    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:39.841001    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:39.841001    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:39.841001    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:40 GMT
	I0507 19:54:39.841001    5068 round_trippers.go:580]     Audit-Id: 9877d43d-3391-40ab-aa1d-ca7068cbf510
	I0507 19:54:39.841001    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:39.841001    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:39.841001    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:40.341638    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:40.341638    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:40.341638    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:40.341638    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:40.345753    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:54:40.345753    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:40.345753    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:40.345753    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:40.345753    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:40.345753    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:40.345753    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:40 GMT
	I0507 19:54:40.345753    5068 round_trippers.go:580]     Audit-Id: 528e0535-d66f-444f-afd3-56d8ef75784d
	I0507 19:54:40.345753    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:40.346992    5068 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
	I0507 19:54:40.840449    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:40.840517    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:40.840517    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:40.840517    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:40.845037    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:54:40.845037    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:40.845037    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:41 GMT
	I0507 19:54:40.845037    5068 round_trippers.go:580]     Audit-Id: 25343fe0-9e72-4622-bf7b-deee6b5833ef
	I0507 19:54:40.845037    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:40.845037    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:40.845037    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:40.845037    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:40.845037    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:41.338432    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:41.338432    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:41.338655    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:41.338690    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:41.341820    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:41.341820    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:41.341820    5068 round_trippers.go:580]     Audit-Id: 66a3f904-96f3-4509-b12c-40e5ddfd2a29
	I0507 19:54:41.341820    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:41.341820    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:41.341820    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:41.341820    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:41.341820    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:41 GMT
	I0507 19:54:41.342783    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:41.838236    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:41.838548    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:41.838548    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:41.838548    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:41.840811    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:54:41.840811    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:41.840811    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:41.840811    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:41.840811    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:41.840811    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:42 GMT
	I0507 19:54:41.840811    5068 round_trippers.go:580]     Audit-Id: 31c03d46-ecad-481b-aff8-424ae611fce1
	I0507 19:54:41.840811    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:41.841862    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:42.336602    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:42.336938    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:42.336938    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:42.336938    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:42.340307    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:42.340307    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:42.340307    5068 round_trippers.go:580]     Audit-Id: 26e30d0b-f4d9-4220-a900-a789dbcf596e
	I0507 19:54:42.340307    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:42.340307    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:42.341003    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:42.341003    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:42.341003    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:42 GMT
	I0507 19:54:42.341413    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:42.835863    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:42.835997    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:42.835997    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:42.835997    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:42.843531    5068 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:54:42.843531    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:42.843531    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:42.843531    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:42.843531    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:42.843531    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:43 GMT
	I0507 19:54:42.843531    5068 round_trippers.go:580]     Audit-Id: 75a981b4-f7c4-488a-b913-51fc1934e298
	I0507 19:54:42.843531    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:42.843531    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:42.844082    5068 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
	I0507 19:54:43.336097    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:43.336097    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:43.336097    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:43.336097    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:43.340439    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:43.340439    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:43.340439    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:43.340439    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:43.340439    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:43.340439    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:43.340439    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:43 GMT
	I0507 19:54:43.340439    5068 round_trippers.go:580]     Audit-Id: 25aea8c5-ab1b-40d5-9968-b22f0eef5fb8
	I0507 19:54:43.340956    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:43.835928    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:43.835928    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:43.836022    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:43.836022    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:43.839462    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:43.839462    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:43.839462    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:44 GMT
	I0507 19:54:43.839462    5068 round_trippers.go:580]     Audit-Id: 09ad96eb-261d-4d97-b175-7d5590e74d57
	I0507 19:54:43.839462    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:43.839740    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:43.839740    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:43.839740    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:43.839812    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:44.350285    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:44.350402    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:44.350402    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:44.350496    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:44.354691    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:54:44.354691    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:44.354691    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:44 GMT
	I0507 19:54:44.354691    5068 round_trippers.go:580]     Audit-Id: 39a67830-b88e-415d-96dc-0d0aaf5d6747
	I0507 19:54:44.354933    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:44.354933    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:44.354933    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:44.354933    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:44.355102    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:44.848641    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:44.848641    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:44.848641    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:44.848641    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:44.851015    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:54:44.851015    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:44.851015    5068 round_trippers.go:580]     Audit-Id: 7680049c-a20a-4e11-8f86-6bbc2cb1aa08
	I0507 19:54:44.851015    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:44.851015    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:44.851015    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:44.851015    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:44.851015    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:45 GMT
	I0507 19:54:44.851888    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:44.852649    5068 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
	I0507 19:54:45.346891    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:45.346891    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:45.346891    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:45.346891    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:45.350506    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:45.351116    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:45.351215    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:45.351215    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:45 GMT
	I0507 19:54:45.351215    5068 round_trippers.go:580]     Audit-Id: cb947d50-f5c1-4775-9fca-2b37d797f761
	I0507 19:54:45.351215    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:45.351215    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:45.351215    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:45.351538    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:45.848688    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:45.848850    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:45.848850    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:45.848850    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:45.852137    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:45.852137    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:45.852602    5068 round_trippers.go:580]     Audit-Id: b2718bba-8627-431f-8e98-222150fb29a5
	I0507 19:54:45.852602    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:45.852602    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:45.852602    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:45.852602    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:45.852602    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:46 GMT
	I0507 19:54:45.852939    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1674","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5365 chars]
	I0507 19:54:46.337027    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:46.337094    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:46.337270    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:46.337270    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:46.339925    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:54:46.339925    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:46.339925    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:46.339925    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:46.339925    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:46.339925    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:46 GMT
	I0507 19:54:46.339925    5068 round_trippers.go:580]     Audit-Id: 87346c83-e4b6-413a-9680-0eff6961133d
	I0507 19:54:46.340682    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:46.340885    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:46.850328    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:46.850328    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:46.850328    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:46.850328    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:46.854150    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:46.854150    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:46.854150    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:46.854150    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:46.854150    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:46.854150    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:46.854150    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:47 GMT
	I0507 19:54:46.854150    5068 round_trippers.go:580]     Audit-Id: 0f75556f-26e3-43ee-8c66-9ba6bea59479
	I0507 19:54:46.854495    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:46.855541    5068 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
	I0507 19:54:47.335318    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:47.335318    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:47.335318    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:47.335318    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:47.339112    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:54:47.339112    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:47.339215    5068 round_trippers.go:580]     Audit-Id: 28e5290f-fbf4-4482-9965-a4ae310ac9c3
	I0507 19:54:47.339215    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:47.339215    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:47.339251    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:47.339251    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:47.339266    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:47 GMT
	I0507 19:54:47.339266    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:47.836778    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:47.836778    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:47.836778    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:47.836778    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:47.840365    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:47.840365    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:47.840365    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:47.840365    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:47.840365    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:47.840365    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:48 GMT
	I0507 19:54:47.840365    5068 round_trippers.go:580]     Audit-Id: 6c0f332b-a162-46e9-b88b-f2b20aeaa1a7
	I0507 19:54:47.840365    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:47.840774    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:48.338764    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:48.338859    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:48.338859    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:48.338859    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:48.342141    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:48.342141    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:48.342141    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:48.342141    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:48.342336    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:48 GMT
	I0507 19:54:48.342336    5068 round_trippers.go:580]     Audit-Id: fa33c927-2239-4be8-aed3-b58892d613d9
	I0507 19:54:48.342336    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:48.342336    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:48.342483    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:48.837793    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:48.837793    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:48.837793    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:48.837793    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:48.841432    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:48.841432    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:48.841546    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:48.841546    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:49 GMT
	I0507 19:54:48.841546    5068 round_trippers.go:580]     Audit-Id: dcd7af56-30fb-4b02-9dfc-d3268b0e9e27
	I0507 19:54:48.841546    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:48.841546    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:48.841546    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:48.841696    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:49.337250    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:49.337250    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:49.337250    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:49.337250    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:49.340868    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:49.341024    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:49.341024    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:49.341024    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:49.341024    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:49.341024    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:49.341024    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:49 GMT
	I0507 19:54:49.341024    5068 round_trippers.go:580]     Audit-Id: 408a977a-8708-4f30-93e5-96d990e60762
	I0507 19:54:49.341267    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:49.341697    5068 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
	I0507 19:54:49.850613    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:49.850613    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:49.850613    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:49.850613    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:49.853968    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:49.853968    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:49.853968    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:49.853968    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:49.854583    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:50 GMT
	I0507 19:54:49.854583    5068 round_trippers.go:580]     Audit-Id: 77f3fa91-52a6-4a89-8ad1-00d806b9c92e
	I0507 19:54:49.854583    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:49.854583    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:49.854909    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:50.337365    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:50.337449    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:50.337522    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:50.337522    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:50.340897    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:50.340897    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:50.340897    5068 round_trippers.go:580]     Audit-Id: 07ec31ed-a9f5-4f42-9f36-9243825a55f8
	I0507 19:54:50.340897    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:50.340897    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:50.340897    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:50.340897    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:50.340897    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:50 GMT
	I0507 19:54:50.341515    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:50.837123    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:50.837218    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:50.837218    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:50.837218    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:50.841072    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:50.841072    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:50.841573    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:51 GMT
	I0507 19:54:50.841573    5068 round_trippers.go:580]     Audit-Id: 9876bab3-478c-4f2f-9a55-996b74f9ab35
	I0507 19:54:50.841573    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:50.841573    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:50.841573    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:50.841573    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:50.841899    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:51.348836    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:51.348836    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:51.348836    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:51.348836    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:51.352322    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:51.352322    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:51.352575    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:51.352575    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:51.352575    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:51 GMT
	I0507 19:54:51.352575    5068 round_trippers.go:580]     Audit-Id: 558b874c-e458-4ba6-b66c-adf0118ce0b9
	I0507 19:54:51.352575    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:51.352575    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:51.352731    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:51.353584    5068 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
	I0507 19:54:51.847679    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:51.847846    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:51.847846    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:51.847846    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:51.851616    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:51.851616    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:51.851616    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:51.851616    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:52 GMT
	I0507 19:54:51.851616    5068 round_trippers.go:580]     Audit-Id: 18ef90fa-cb68-44b6-93b1-0cd5bca890f2
	I0507 19:54:51.851616    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:51.851616    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:51.851616    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:51.851616    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:52.346373    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:52.346468    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:52.346468    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:52.346468    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:52.351811    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:54:52.351811    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:52.351811    5068 round_trippers.go:580]     Audit-Id: cf310bb7-e830-4083-a498-7778fb96fe4f
	I0507 19:54:52.351811    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:52.351811    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:52.351811    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:52.351811    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:52.352337    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:52 GMT
	I0507 19:54:52.352625    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:52.845058    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:52.845058    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:52.845058    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:52.845058    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:52.848588    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:52.848588    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:52.848588    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:52.848588    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:52.848588    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:52.848588    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:53 GMT
	I0507 19:54:52.848588    5068 round_trippers.go:580]     Audit-Id: a48e1996-5aa3-45ef-8c3a-b7bf3774fe56
	I0507 19:54:52.848588    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:52.849306    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:53.345460    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:53.345460    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:53.345460    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:53.345460    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:53.349028    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:53.349585    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:53.349585    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:53 GMT
	I0507 19:54:53.349585    5068 round_trippers.go:580]     Audit-Id: e133ac86-bfd9-4d7d-a4a6-eaf2e3af5573
	I0507 19:54:53.349585    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:53.349585    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:53.349585    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:53.349585    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:53.350018    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:53.848003    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:53.848003    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:53.848003    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:53.848003    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:53.851717    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:53.851717    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:53.851717    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:54 GMT
	I0507 19:54:53.852512    5068 round_trippers.go:580]     Audit-Id: 2385b0d1-300a-40b5-a4cb-3cc14a8195bf
	I0507 19:54:53.852512    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:53.852512    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:53.852512    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:53.852512    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:53.852804    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:53.853494    5068 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
	I0507 19:54:54.346701    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:54.346701    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:54.346701    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:54.346701    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:54.350283    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:54.350283    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:54.350283    5068 round_trippers.go:580]     Audit-Id: 0fd1fb01-fce2-45f2-9bd1-f601d8f5b43c
	I0507 19:54:54.350283    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:54.350283    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:54.350550    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:54.350550    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:54.350550    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:54 GMT
	I0507 19:54:54.350794    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:54.845626    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:54.845913    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:54.846010    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:54.846010    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:54.850313    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:54.850313    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:54.850313    5068 round_trippers.go:580]     Audit-Id: d7226798-81dc-4fad-9ab7-b0b34cb9e3cc
	I0507 19:54:54.850313    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:54.850313    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:54.850313    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:54.850313    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:54.850313    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:55 GMT
	I0507 19:54:54.851200    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:55.347179    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:55.347179    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:55.347179    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:55.347404    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:55.350452    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:55.350452    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:55.350452    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:55.350452    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:55.350452    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:55 GMT
	I0507 19:54:55.350452    5068 round_trippers.go:580]     Audit-Id: 7ec2c10e-019a-4936-8f45-9d79ee85dac5
	I0507 19:54:55.350452    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:55.350452    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:55.351779    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:55.847469    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:55.847469    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:55.847586    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:55.847586    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:55.851007    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:55.851791    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:55.851791    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:55.851791    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:56 GMT
	I0507 19:54:55.851791    5068 round_trippers.go:580]     Audit-Id: d48733e4-a849-420a-ae3d-3c453bc81998
	I0507 19:54:55.851791    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:55.851791    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:55.851791    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:55.851791    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:56.349174    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:56.349174    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:56.349174    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:56.349385    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:56.353057    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:56.353057    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:56.353057    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:56.353057    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:56.353057    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:56.353057    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:56 GMT
	I0507 19:54:56.353057    5068 round_trippers.go:580]     Audit-Id: e4c5784c-e5fa-42cf-a6b6-f093181e7c4e
	I0507 19:54:56.353330    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:56.353394    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:56.354380    5068 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
	I0507 19:54:56.849635    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:56.849635    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:56.849969    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:56.850069    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:56.853465    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:56.853465    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:56.853465    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:56.853465    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:56.853465    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:57 GMT
	I0507 19:54:56.853465    5068 round_trippers.go:580]     Audit-Id: 559d7811-b41f-43bf-9e70-03779b3fc367
	I0507 19:54:56.853465    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:56.853465    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:56.854320    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:57.335976    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:57.335976    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:57.336229    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:57.336229    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:57.338418    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:54:57.339483    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:57.339483    5068 round_trippers.go:580]     Audit-Id: 23b64865-8f68-4490-916c-eb427ec7e2e0
	I0507 19:54:57.339483    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:57.339483    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:57.339483    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:57.339483    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:57.339483    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:57 GMT
	I0507 19:54:57.339852    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:57.851316    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:57.851316    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:57.851316    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:57.851316    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:57.854883    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:57.855343    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:57.855343    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:57.855343    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:57.855343    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:58 GMT
	I0507 19:54:57.855393    5068 round_trippers.go:580]     Audit-Id: e169f590-4e5a-45fa-97bc-744f7120913f
	I0507 19:54:57.855393    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:57.855393    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:57.855393    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:58.337589    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:58.337652    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:58.337652    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:58.337652    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:58.341495    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:58.341703    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:58.341825    5068 round_trippers.go:580]     Audit-Id: fb198564-7437-4582-93bf-2471c82e9ace
	I0507 19:54:58.341825    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:58.341825    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:58.341892    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:58.341892    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:58.341892    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:58 GMT
	I0507 19:54:58.342226    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:58.839028    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:58.839028    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:58.839028    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:58.839028    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:58.842625    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:58.842625    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:58.842625    5068 round_trippers.go:580]     Audit-Id: 14837fd9-0a54-4fef-9695-e9c8af453517
	I0507 19:54:58.842625    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:58.842625    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:58.842625    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:58.842859    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:58.842859    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:59 GMT
	I0507 19:54:58.843170    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:58.843308    5068 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
	I0507 19:54:59.339536    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:59.339831    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:59.339908    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:59.339908    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:59.343371    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:59.343371    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:59.343760    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:59.343760    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:59.343760    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:54:59 GMT
	I0507 19:54:59.343760    5068 round_trippers.go:580]     Audit-Id: 3cf5ca08-8ee8-4481-9865-4fd06975314b
	I0507 19:54:59.343760    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:59.343760    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:59.344027    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:54:59.838233    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:54:59.838233    5068 round_trippers.go:469] Request Headers:
	I0507 19:54:59.838233    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:54:59.838233    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:54:59.841526    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:54:59.841526    5068 round_trippers.go:577] Response Headers:
	I0507 19:54:59.841652    5068 round_trippers.go:580]     Audit-Id: 7142b85e-6300-45f1-9e73-c1cd04df25b3
	I0507 19:54:59.841652    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:54:59.841652    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:54:59.841652    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:54:59.841652    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:54:59.841652    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:00 GMT
	I0507 19:54:59.841840    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:00.339290    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:00.339290    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:00.339290    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:00.339290    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:00.343244    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:00.343244    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:00.343244    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:00.343244    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:00.343244    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:00.343244    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:00.343244    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:00 GMT
	I0507 19:55:00.343244    5068 round_trippers.go:580]     Audit-Id: b4c82749-f98f-45d9-abb3-e05a3106c7c6
	I0507 19:55:00.343244    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:00.840709    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:00.841010    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:00.841010    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:00.841108    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:00.845869    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:00.845869    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:00.845869    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:00.845869    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:00.845869    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:01 GMT
	I0507 19:55:00.845869    5068 round_trippers.go:580]     Audit-Id: a6aee01e-d132-4093-a596-8b938ab230b2
	I0507 19:55:00.845869    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:00.845869    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:00.846850    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:00.847661    5068 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
	I0507 19:55:01.336642    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:01.336972    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:01.336972    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:01.337065    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:01.339776    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:01.340784    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:01.340784    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:01.340784    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:01.340784    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:01.340784    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:01 GMT
	I0507 19:55:01.340784    5068 round_trippers.go:580]     Audit-Id: 6167c09c-11e2-478f-928a-ae746331a11a
	I0507 19:55:01.340784    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:01.340986    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:01.850031    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:01.850031    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:01.850031    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:01.850031    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:01.856917    5068 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:55:01.856917    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:01.856917    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:01.856917    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:02 GMT
	I0507 19:55:01.856917    5068 round_trippers.go:580]     Audit-Id: 8b321ea2-bd7d-4457-986c-5e3c7f448ee3
	I0507 19:55:01.856917    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:01.856917    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:01.856917    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:01.856917    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:02.349929    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:02.349929    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:02.349929    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:02.349929    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:02.353531    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:02.353531    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:02.353531    5068 round_trippers.go:580]     Audit-Id: ef30abed-ebd5-404a-b3c4-956ea9ae4381
	I0507 19:55:02.353531    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:02.353531    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:02.353531    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:02.353943    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:02.353943    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:02 GMT
	I0507 19:55:02.354329    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:02.849132    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:02.849211    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:02.849211    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:02.849276    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:02.855011    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:55:02.855554    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:02.855554    5068 round_trippers.go:580]     Audit-Id: a422289f-cb7c-40b2-bbaa-3d930a9086ac
	I0507 19:55:02.855554    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:02.855554    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:02.855554    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:02.855554    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:02.855554    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:03 GMT
	I0507 19:55:02.855734    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:02.855734    5068 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
	I0507 19:55:03.349898    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:03.350129    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:03.350129    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:03.350129    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:03.353496    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:03.353496    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:03.353496    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:03 GMT
	I0507 19:55:03.353496    5068 round_trippers.go:580]     Audit-Id: 98e9921f-8cf8-41b2-acb3-85c15fb6b4b0
	I0507 19:55:03.353496    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:03.353496    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:03.353496    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:03.353496    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:03.354628    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:03.847158    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:03.847158    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:03.847158    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:03.847158    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:03.852049    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:03.852049    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:03.852049    5068 round_trippers.go:580]     Audit-Id: a2e7198f-708b-47a5-8467-26e1aab3d74c
	I0507 19:55:03.852049    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:03.852049    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:03.852049    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:03.852049    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:03.852049    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:04 GMT
	I0507 19:55:03.852049    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:04.348944    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:04.349040    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:04.349040    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:04.349040    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:04.354481    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:55:04.354554    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:04.354621    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:04 GMT
	I0507 19:55:04.354621    5068 round_trippers.go:580]     Audit-Id: 8c709bda-4429-4ccd-9142-f72e8037587a
	I0507 19:55:04.354621    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:04.354621    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:04.354679    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:04.354679    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:04.354961    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:04.847941    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:04.847941    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:04.847941    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:04.847941    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:04.852539    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:04.852539    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:04.852628    5068 round_trippers.go:580]     Audit-Id: a71cc345-f387-4de6-983d-cf801f36271d
	I0507 19:55:04.852628    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:04.852628    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:04.852628    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:04.852628    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:04.852628    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:05 GMT
	I0507 19:55:04.852856    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:05.348085    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:05.348085    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:05.348085    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:05.348085    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:05.350695    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:05.351672    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:05.351672    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:05.351672    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:05 GMT
	I0507 19:55:05.351672    5068 round_trippers.go:580]     Audit-Id: df267aca-a40c-4b28-aee4-6fabceb7299b
	I0507 19:55:05.351672    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:05.351672    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:05.351672    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:05.351786    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:05.352597    5068 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
	I0507 19:55:05.844965    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:05.845035    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:05.845035    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:05.845035    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:05.849421    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:05.849421    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:05.849421    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:05.849421    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:06 GMT
	I0507 19:55:05.849930    5068 round_trippers.go:580]     Audit-Id: 5ce6c469-be70-4ae9-bf18-1f122b690c76
	I0507 19:55:05.849930    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:05.849930    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:05.849986    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:05.850088    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:06.344992    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:06.344992    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:06.344992    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:06.344992    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:06.348716    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:06.348716    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:06.348716    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:06.348716    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:06 GMT
	I0507 19:55:06.348716    5068 round_trippers.go:580]     Audit-Id: 396799ee-37bd-45de-a683-ff48513ff3d8
	I0507 19:55:06.348716    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:06.348716    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:06.348716    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:06.349155    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:06.845792    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:06.845792    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:06.845890    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:06.845890    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:06.852218    5068 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:55:06.852218    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:06.852218    5068 round_trippers.go:580]     Audit-Id: 47ab07b1-232d-4c89-8a91-0328651a1860
	I0507 19:55:06.852218    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:06.852218    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:06.852218    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:06.852218    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:06.852218    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:07 GMT
	I0507 19:55:06.852516    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:07.343920    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:07.343987    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:07.344053    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:07.344053    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:07.351578    5068 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:55:07.351578    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:07.351578    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:07.351578    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:07.351578    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:07.351578    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:07.351578    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:07 GMT
	I0507 19:55:07.351578    5068 round_trippers.go:580]     Audit-Id: 7b0a3ed5-81cd-4565-9a99-a0a9fff28dd5
	I0507 19:55:07.352159    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:07.845246    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:07.845246    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:07.845246    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:07.845681    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:07.849084    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:07.849154    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:07.849154    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:07.849154    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:07.849154    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:07.849154    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:08 GMT
	I0507 19:55:07.849154    5068 round_trippers.go:580]     Audit-Id: f5071e63-5a4d-4922-923d-0d2348635624
	I0507 19:55:07.849154    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:07.849452    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:07.849909    5068 node_ready.go:53] node "multinode-600000" has status "Ready":"False"
	I0507 19:55:08.344352    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:08.344426    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:08.344426    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:08.344426    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:08.347712    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:08.347712    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:08.347712    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:08.347712    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:08.347712    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:08.347712    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:08 GMT
	I0507 19:55:08.347712    5068 round_trippers.go:580]     Audit-Id: 817ba9d8-5dcb-4ec6-a7e9-1bfe81869bfb
	I0507 19:55:08.347712    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:08.348176    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:08.845262    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:08.845262    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:08.845262    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:08.845262    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:08.848278    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:08.848278    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:08.848278    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:08.848278    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:08.848278    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:09 GMT
	I0507 19:55:08.848278    5068 round_trippers.go:580]     Audit-Id: 04caf244-df00-4eca-9770-8c62183ae62c
	I0507 19:55:08.848278    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:08.848278    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:08.849277    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1793","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5581 chars]
	I0507 19:55:09.345327    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:09.345327    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:09.345327    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:09.345327    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:09.349234    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:09.349234    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:09.349234    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:09.349234    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:09.349234    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:09.349234    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:09 GMT
	I0507 19:55:09.349234    5068 round_trippers.go:580]     Audit-Id: 039fc18d-9e72-4fbc-abec-13611c6888b9
	I0507 19:55:09.349234    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:09.349234    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1835","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0507 19:55:09.350376    5068 node_ready.go:49] node "multinode-600000" has status "Ready":"True"
	I0507 19:55:09.350444    5068 node_ready.go:38] duration metric: took 31.0142172s for node "multinode-600000" to be "Ready" ...
	I0507 19:55:09.350531    5068 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 19:55:09.350712    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods
	I0507 19:55:09.350712    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:09.350712    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:09.350805    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:09.355671    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:09.355722    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:09.355722    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:09.355722    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:09.355722    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:09.355722    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:09.355722    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:09 GMT
	I0507 19:55:09.355722    5068 round_trippers.go:580]     Audit-Id: 576b9274-6ecc-4b30-b616-34358afaaf78
	I0507 19:55:09.357720    5068 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1835"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86566 chars]
	I0507 19:55:09.360891    5068 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace to be "Ready" ...
	I0507 19:55:09.361457    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:09.361457    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:09.361457    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:09.361457    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:09.363734    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:09.363734    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:09.363734    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:09.363734    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:09.363734    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:09 GMT
	I0507 19:55:09.363734    5068 round_trippers.go:580]     Audit-Id: 51661003-ec1d-4718-8e76-f9f1aeb10ea8
	I0507 19:55:09.363734    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:09.363734    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:09.364596    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:09.365136    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:09.365136    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:09.365136    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:09.365136    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:09.367735    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:09.367735    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:09.367735    5068 round_trippers.go:580]     Audit-Id: c850e72b-e25b-4bd0-ba61-b88dd1081803
	I0507 19:55:09.367735    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:09.367735    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:09.367735    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:09.367735    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:09.367735    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:09 GMT
	I0507 19:55:09.368326    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1835","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0507 19:55:09.875945    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:09.875945    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:09.875945    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:09.875945    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:09.879564    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:09.880016    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:09.880016    5068 round_trippers.go:580]     Audit-Id: 4d8365c4-e2e0-44be-9f9a-a7302d6394ac
	I0507 19:55:09.880016    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:09.880016    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:09.880016    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:09.880016    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:09.880016    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:10 GMT
	I0507 19:55:09.880245    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:09.880971    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:09.880971    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:09.880971    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:09.880971    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:09.883053    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:09.883053    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:09.883053    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:09.883053    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:09.883053    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:09.884095    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:10 GMT
	I0507 19:55:09.884133    5068 round_trippers.go:580]     Audit-Id: 3099c473-d4f1-4981-9d10-f6b705898903
	I0507 19:55:09.884178    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:09.884178    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1835","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0507 19:55:10.362524    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:10.362612    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:10.362612    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:10.362612    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:10.365010    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:10.366017    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:10.366067    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:10.366067    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:10.366067    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:10.366067    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:10.366067    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:10 GMT
	I0507 19:55:10.366067    5068 round_trippers.go:580]     Audit-Id: d08a40fc-6423-427c-a14a-ce9009f60194
	I0507 19:55:10.366300    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:10.367339    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:10.367339    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:10.367436    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:10.367436    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:10.372098    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:10.372098    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:10.372098    5068 round_trippers.go:580]     Audit-Id: ad3ea9da-3c83-4a1c-9e27-b1d7eb5e57bd
	I0507 19:55:10.372098    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:10.372098    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:10.372098    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:10.372098    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:10.372098    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:10 GMT
	I0507 19:55:10.372098    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1835","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0507 19:55:10.875448    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:10.875532    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:10.875532    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:10.875532    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:10.881160    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:55:10.881160    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:10.881160    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:10.881160    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:10.881160    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:10.881160    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:11 GMT
	I0507 19:55:10.881160    5068 round_trippers.go:580]     Audit-Id: a647a1c5-50c2-4380-8594-a23e2d432ce7
	I0507 19:55:10.881160    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:10.881160    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:10.882353    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:10.882353    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:10.882416    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:10.882416    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:10.885655    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:10.885749    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:10.885749    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:10.885749    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:10.885749    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:10.885749    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:11 GMT
	I0507 19:55:10.885749    5068 round_trippers.go:580]     Audit-Id: 6a96dbf7-6512-4445-9530-7dae4824e23f
	I0507 19:55:10.885749    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:10.885749    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1835","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5358 chars]
	I0507 19:55:11.372710    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:11.372710    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:11.372710    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:11.372710    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:11.377276    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:11.377276    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:11.377276    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:11.377276    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:11.377276    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:11.377276    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:11 GMT
	I0507 19:55:11.377276    5068 round_trippers.go:580]     Audit-Id: 9ccb2256-e445-4f4b-b17b-d96badad862f
	I0507 19:55:11.377276    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:11.377800    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:11.378438    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:11.378515    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:11.378515    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:11.378561    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:11.383236    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:11.383236    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:11.383236    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:11.383236    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:11.383236    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:11 GMT
	I0507 19:55:11.383236    5068 round_trippers.go:580]     Audit-Id: fcee5a8a-3568-4c59-ae35-c2a87cd81efc
	I0507 19:55:11.383236    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:11.383236    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:11.383863    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:11.384112    5068 pod_ready.go:102] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"False"
	I0507 19:55:11.873792    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:11.874025    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:11.874025    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:11.874025    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:11.878853    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:11.879200    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:11.879200    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:11.879200    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:11.879200    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:11.879274    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:12 GMT
	I0507 19:55:11.879274    5068 round_trippers.go:580]     Audit-Id: 1b86976d-e36a-4afe-9be7-5f4695217931
	I0507 19:55:11.879274    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:11.879274    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:11.880221    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:11.880344    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:11.880344    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:11.880344    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:11.883744    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:11.883744    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:11.883744    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:11.883744    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:12 GMT
	I0507 19:55:11.883744    5068 round_trippers.go:580]     Audit-Id: 3e096447-c3a1-49a3-ad2a-1ab2d5a09024
	I0507 19:55:11.883744    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:11.883744    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:11.883744    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:11.883744    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:12.374155    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:12.374286    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:12.374286    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:12.374286    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:12.377682    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:12.377682    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:12.377682    5068 round_trippers.go:580]     Audit-Id: 9a913910-9553-445e-a9c7-d04310fe8ff0
	I0507 19:55:12.377682    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:12.377682    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:12.377682    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:12.378246    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:12.378246    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:12 GMT
	I0507 19:55:12.378535    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:12.379484    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:12.379608    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:12.379608    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:12.379608    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:12.384804    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:55:12.385343    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:12.385343    5068 round_trippers.go:580]     Audit-Id: fa774f8f-9838-4c9c-89c8-22a899e85856
	I0507 19:55:12.385343    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:12.385343    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:12.385343    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:12.385343    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:12.385343    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:12 GMT
	I0507 19:55:12.385444    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:12.873079    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:12.873079    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:12.873079    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:12.873419    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:12.876768    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:12.877298    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:12.877298    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:12.877298    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:13 GMT
	I0507 19:55:12.877298    5068 round_trippers.go:580]     Audit-Id: ad6179d3-e95b-44c5-84ad-6716c81247e4
	I0507 19:55:12.877298    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:12.877298    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:12.877298    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:12.877298    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:12.878737    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:12.878819    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:12.878819    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:12.878819    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:12.882106    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:12.882106    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:12.882106    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:13 GMT
	I0507 19:55:12.882106    5068 round_trippers.go:580]     Audit-Id: b2fd47bf-9021-4dc6-a660-7370cbacedba
	I0507 19:55:12.882106    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:12.882404    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:12.882404    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:12.882404    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:12.882693    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:13.373278    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:13.373278    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:13.373278    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:13.373278    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:13.375940    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:13.375940    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:13.375940    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:13 GMT
	I0507 19:55:13.375940    5068 round_trippers.go:580]     Audit-Id: 6569f388-308c-45ee-bac5-9d0ddc6d0169
	I0507 19:55:13.375940    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:13.375940    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:13.375940    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:13.375940    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:13.376894    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:13.377916    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:13.377995    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:13.377995    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:13.377995    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:13.380642    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:13.381102    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:13.381102    5068 round_trippers.go:580]     Audit-Id: ebb20549-e548-4987-ab9a-b303e44a76b1
	I0507 19:55:13.381102    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:13.381102    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:13.381102    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:13.381102    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:13.381102    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:13 GMT
	I0507 19:55:13.381394    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:13.871403    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:13.871403    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:13.871403    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:13.871403    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:13.874982    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:13.875072    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:13.875072    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:13.875072    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:13.875072    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:14 GMT
	I0507 19:55:13.875072    5068 round_trippers.go:580]     Audit-Id: d4f477ed-7b4f-4be2-8e41-bd3eb2491c4f
	I0507 19:55:13.875072    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:13.875072    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:13.875336    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:13.876097    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:13.876097    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:13.876097    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:13.876097    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:13.878267    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:13.878267    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:13.878267    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:13.878267    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:14 GMT
	I0507 19:55:13.878267    5068 round_trippers.go:580]     Audit-Id: be7c1ec1-56df-4bce-9c08-bec6ef0d0587
	I0507 19:55:13.878267    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:13.878267    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:13.878267    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:13.879287    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:13.879453    5068 pod_ready.go:102] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"False"
	I0507 19:55:14.373674    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:14.373767    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:14.373767    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:14.373767    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:14.378501    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:14.378501    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:14.378501    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:14.378501    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:14.378501    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:14.378501    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:14.378501    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:14 GMT
	I0507 19:55:14.379030    5068 round_trippers.go:580]     Audit-Id: bdeabc66-205d-4d5e-b5f4-230a2a579a0f
	I0507 19:55:14.379374    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:14.380393    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:14.380393    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:14.380393    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:14.380393    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:14.382197    5068 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0507 19:55:14.383152    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:14.383152    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:14.383152    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:14.383152    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:14.383152    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:14.383152    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:14 GMT
	I0507 19:55:14.383152    5068 round_trippers.go:580]     Audit-Id: efe27174-e13d-415d-a277-2abbdd19f203
	I0507 19:55:14.383152    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:14.872459    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:14.872459    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:14.872459    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:14.872459    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:14.877854    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:55:14.877854    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:14.877854    5068 round_trippers.go:580]     Audit-Id: 50b7b1a9-7261-487c-8597-18744258e3bf
	I0507 19:55:14.877964    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:14.877964    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:14.877964    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:14.877964    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:14.877964    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:15 GMT
	I0507 19:55:14.878051    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:14.879173    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:14.879240    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:14.879240    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:14.879240    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:14.884438    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:55:14.884520    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:14.884520    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:14.884520    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:14.884520    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:15 GMT
	I0507 19:55:14.884520    5068 round_trippers.go:580]     Audit-Id: 141743ce-e521-4544-84a8-1a306e6fbed9
	I0507 19:55:14.884520    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:14.884520    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:14.884638    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:15.373886    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:15.373886    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:15.373886    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:15.373886    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:15.377336    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:15.377336    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:15.377336    5068 round_trippers.go:580]     Audit-Id: 436e7ca4-a063-402c-911a-7c8429cd6bb3
	I0507 19:55:15.377447    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:15.377447    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:15.377447    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:15.377447    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:15.377447    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:15 GMT
	I0507 19:55:15.377642    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:15.378324    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:15.378411    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:15.378411    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:15.378411    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:15.382655    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:15.382655    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:15.382655    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:15.382655    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:15.382655    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:15.382655    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:15 GMT
	I0507 19:55:15.382655    5068 round_trippers.go:580]     Audit-Id: 3dfe6f1c-45f6-4211-8822-ce35ec58accb
	I0507 19:55:15.382655    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:15.383085    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:15.873610    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:15.873839    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:15.873839    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:15.873902    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:15.877744    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:15.877744    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:15.877744    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:15.877744    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:15.877744    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:15.877744    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:16 GMT
	I0507 19:55:15.877744    5068 round_trippers.go:580]     Audit-Id: 164a6573-14ed-420e-b40e-14a7bda5040d
	I0507 19:55:15.877744    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:15.877744    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:15.878824    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:15.878824    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:15.878918    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:15.878918    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:15.882048    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:15.882048    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:15.882048    5068 round_trippers.go:580]     Audit-Id: 7d16190a-84ba-4f09-b41d-70a1d4800e74
	I0507 19:55:15.882048    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:15.882048    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:15.882048    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:15.882048    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:15.882525    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:16 GMT
	I0507 19:55:15.882525    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:15.882525    5068 pod_ready.go:102] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"False"
	I0507 19:55:16.373143    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:16.373143    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:16.373143    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:16.373143    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:16.376899    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:16.376899    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:16.376899    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:16.376899    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:16.376899    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:16.376899    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:16 GMT
	I0507 19:55:16.376899    5068 round_trippers.go:580]     Audit-Id: 58fb7775-c75b-493d-93b8-4f66090fb416
	I0507 19:55:16.376899    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:16.377608    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:16.378457    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:16.378457    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:16.378457    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:16.378457    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:16.381635    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:16.381679    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:16.381738    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:16.381778    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:16.381816    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:16 GMT
	I0507 19:55:16.381816    5068 round_trippers.go:580]     Audit-Id: be64c9c6-22ea-44d8-bc72-7a4898827921
	I0507 19:55:16.381816    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:16.381816    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:16.382230    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:16.877190    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:16.877280    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:16.877280    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:16.877362    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:16.880650    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:16.881637    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:16.881659    5068 round_trippers.go:580]     Audit-Id: 4e5e3c98-f6b1-4875-9b0d-6b01e9db1559
	I0507 19:55:16.881659    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:16.881659    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:16.881659    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:16.881659    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:16.881659    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:17 GMT
	I0507 19:55:16.881870    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:16.882684    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:16.882748    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:16.882748    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:16.882748    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:16.887639    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:16.887639    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:16.888671    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:16.888671    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:16.888693    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:17 GMT
	I0507 19:55:16.888693    5068 round_trippers.go:580]     Audit-Id: 58ac507d-1dce-481c-851a-973fa4daa78b
	I0507 19:55:16.888693    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:16.888693    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:16.888834    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:17.375479    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:17.375479    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:17.375479    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:17.375479    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:17.378439    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:17.378439    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:17.378439    5068 round_trippers.go:580]     Audit-Id: 11c20e8c-5ff9-4f78-9deb-21f7309c0ebe
	I0507 19:55:17.378439    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:17.378439    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:17.378439    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:17.378439    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:17.378439    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:17 GMT
	I0507 19:55:17.379437    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:17.379437    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:17.380429    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:17.380429    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:17.380429    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:17.389429    5068 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0507 19:55:17.389429    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:17.389429    5068 round_trippers.go:580]     Audit-Id: 15860ed1-ec51-4bd7-9885-697bf7b47d60
	I0507 19:55:17.389429    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:17.389429    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:17.389429    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:17.389429    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:17.389429    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:17 GMT
	I0507 19:55:17.389884    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:17.872605    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:17.872605    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:17.872605    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:17.872605    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:17.876885    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:17.877272    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:17.877272    5068 round_trippers.go:580]     Audit-Id: d3390891-392c-4b2e-95f9-e191c8080bbd
	I0507 19:55:17.877272    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:17.877272    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:17.877272    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:17.877272    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:17.877272    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:18 GMT
	I0507 19:55:17.877678    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:17.878744    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:17.878744    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:17.878744    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:17.878744    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:17.881553    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:17.881740    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:17.881740    5068 round_trippers.go:580]     Audit-Id: 34fcc5eb-9f24-442d-a4b1-dbc1872ddb56
	I0507 19:55:17.881740    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:17.881740    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:17.881740    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:17.881740    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:17.881740    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:18 GMT
	I0507 19:55:17.881941    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:18.371583    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:18.371583    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:18.371840    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:18.371840    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:18.375100    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:18.375100    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:18.375100    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:18.375100    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:18.375100    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:18.375100    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:18 GMT
	I0507 19:55:18.375100    5068 round_trippers.go:580]     Audit-Id: b1b96eed-8aa6-41a8-91b8-c89fa6573a1d
	I0507 19:55:18.375100    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:18.375971    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:18.376774    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:18.376851    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:18.376851    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:18.376851    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:18.378972    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:18.378972    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:18.378972    5068 round_trippers.go:580]     Audit-Id: c8d49bbe-2c6c-4590-97e1-fb7d8ecb67a0
	I0507 19:55:18.378972    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:18.378972    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:18.378972    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:18.378972    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:18.378972    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:18 GMT
	I0507 19:55:18.379999    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:18.380743    5068 pod_ready.go:102] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"False"
	I0507 19:55:18.862121    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:18.862121    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:18.862121    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:18.862394    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:18.866098    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:18.866098    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:18.866098    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:18.866098    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:18.866098    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:18.866098    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:18.866098    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:19 GMT
	I0507 19:55:18.866098    5068 round_trippers.go:580]     Audit-Id: 21fcd2a7-8f9f-48a7-ac99-43471f3155c2
	I0507 19:55:18.866098    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:18.867483    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:18.867565    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:18.867565    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:18.867679    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:18.870022    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:18.870494    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:18.870494    5068 round_trippers.go:580]     Audit-Id: 75f74b3f-2ec6-40fd-b996-0d1bff7dca8c
	I0507 19:55:18.870494    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:18.870494    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:18.870494    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:18.870494    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:18.870494    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:19 GMT
	I0507 19:55:18.870494    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:19.365234    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:19.365234    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:19.365344    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:19.365344    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:19.370219    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:19.370219    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:19.370219    5068 round_trippers.go:580]     Audit-Id: 3b44395d-79ed-4080-ac32-db8392217173
	I0507 19:55:19.370219    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:19.370219    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:19.370219    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:19.370219    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:19.370219    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:19 GMT
	I0507 19:55:19.370586    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:19.371770    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:19.371770    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:19.371840    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:19.371840    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:19.374111    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:19.374111    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:19.374787    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:19.374787    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:19.374787    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:19.374787    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:19 GMT
	I0507 19:55:19.374860    5068 round_trippers.go:580]     Audit-Id: dfbb8ef8-c703-438f-a6d7-6c5fdf7bb055
	I0507 19:55:19.374894    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:19.374923    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:19.865478    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:19.865478    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:19.865478    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:19.865478    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:19.870100    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:19.870100    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:19.870100    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:20 GMT
	I0507 19:55:19.870100    5068 round_trippers.go:580]     Audit-Id: 9c7fac1e-8101-45da-adb8-dc8ebbe27aed
	I0507 19:55:19.870100    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:19.870100    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:19.870100    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:19.870100    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:19.870100    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:19.871429    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:19.871429    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:19.871494    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:19.871494    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:19.874238    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:19.874238    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:19.874238    5068 round_trippers.go:580]     Audit-Id: 6b6ce56d-3a1a-4403-9bdc-9f564381aa22
	I0507 19:55:19.874238    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:19.874238    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:19.874238    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:19.874238    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:19.874238    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:20 GMT
	I0507 19:55:19.875185    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:20.363561    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:20.363561    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:20.363688    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:20.363688    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:20.367018    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:20.367018    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:20.367018    5068 round_trippers.go:580]     Audit-Id: e9b52941-89c0-4af0-ae07-21748e908eaa
	I0507 19:55:20.367018    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:20.367018    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:20.367018    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:20.367018    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:20.367201    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:20 GMT
	I0507 19:55:20.367396    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:20.367520    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:20.367520    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:20.367520    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:20.367520    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:20.370196    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:20.370708    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:20.370708    5068 round_trippers.go:580]     Audit-Id: 5e98d8cd-8d9b-4583-96b7-3521c7f4b88f
	I0507 19:55:20.370708    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:20.370708    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:20.370708    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:20.370708    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:20.370708    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:20 GMT
	I0507 19:55:20.371017    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:20.864353    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:20.864436    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:20.864436    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:20.864436    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:20.867784    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:20.867784    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:20.867784    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:20.867784    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:20.868678    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:21 GMT
	I0507 19:55:20.868728    5068 round_trippers.go:580]     Audit-Id: 86d603a6-f40b-4f82-9cb3-ee956fd0680d
	I0507 19:55:20.868728    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:20.868728    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:20.868961    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:20.870012    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:20.870085    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:20.870085    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:20.870175    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:20.872514    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:20.872514    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:20.872514    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:20.872514    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:21 GMT
	I0507 19:55:20.872514    5068 round_trippers.go:580]     Audit-Id: 7a95e4ce-4f15-444e-9857-cbd4671c2640
	I0507 19:55:20.872514    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:20.872514    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:20.872514    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:20.873556    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:20.874003    5068 pod_ready.go:102] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"False"
	I0507 19:55:21.376346    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:21.376441    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:21.376441    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:21.376441    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:21.379663    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:21.379663    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:21.379663    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:21 GMT
	I0507 19:55:21.379663    5068 round_trippers.go:580]     Audit-Id: 3249d448-d5b9-4293-881a-e3af7a06b2f9
	I0507 19:55:21.379663    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:21.379663    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:21.379663    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:21.379663    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:21.379663    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:21.380430    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:21.380430    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:21.380493    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:21.380493    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:21.383235    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:21.383364    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:21.383364    5068 round_trippers.go:580]     Audit-Id: 6789e2f7-63f6-4d8f-b7e2-71be34e4c841
	I0507 19:55:21.383364    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:21.383364    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:21.383364    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:21.383364    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:21.383364    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:21 GMT
	I0507 19:55:21.383581    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:21.876072    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:21.876198    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:21.876198    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:21.876198    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:21.882488    5068 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:55:21.882488    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:21.882488    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:22 GMT
	I0507 19:55:21.882488    5068 round_trippers.go:580]     Audit-Id: a72534ef-b622-4f48-be4c-9199efee7d71
	I0507 19:55:21.882488    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:21.882488    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:21.882488    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:21.882488    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:21.883117    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:21.883850    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:21.883850    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:21.883850    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:21.883850    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:21.888332    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:21.888332    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:21.888332    5068 round_trippers.go:580]     Audit-Id: d5c9a0ae-a37b-4331-bee1-b47aabb93d59
	I0507 19:55:21.888332    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:21.888332    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:21.888332    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:21.888332    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:21.888332    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:22 GMT
	I0507 19:55:21.888332    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:22.371676    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:22.372169    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:22.372169    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:22.372169    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:22.378997    5068 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:55:22.378997    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:22.378997    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:22.378997    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:22.378997    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:22.378997    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:22 GMT
	I0507 19:55:22.378997    5068 round_trippers.go:580]     Audit-Id: fa964a3f-994a-4b75-9d08-bc00e556db4c
	I0507 19:55:22.378997    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:22.378997    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:22.380455    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:22.380455    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:22.380594    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:22.380594    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:22.382868    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:22.382868    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:22.382868    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:22.382868    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:22.382868    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:22.382868    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:22.382868    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:22 GMT
	I0507 19:55:22.382868    5068 round_trippers.go:580]     Audit-Id: 9a3e4d34-3b66-4bef-9bea-90e8095d95c0
	I0507 19:55:22.383852    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:22.873085    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:22.873184    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:22.873184    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:22.873184    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:22.876623    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:22.876990    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:22.876990    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:22.876990    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:22.876990    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:22.876990    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:22.876990    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:23 GMT
	I0507 19:55:22.876990    5068 round_trippers.go:580]     Audit-Id: 95b5afb5-83f4-4177-9970-6f43ad43d49f
	I0507 19:55:22.876990    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:22.878216    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:22.878216    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:22.878216    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:22.878216    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:22.880784    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:22.881603    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:22.881681    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:23 GMT
	I0507 19:55:22.881681    5068 round_trippers.go:580]     Audit-Id: ffc704b4-92d4-4dda-aecc-4fce32ce2642
	I0507 19:55:22.881681    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:22.881681    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:22.881681    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:22.881681    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:22.881681    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:22.882210    5068 pod_ready.go:102] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"False"
	I0507 19:55:23.372852    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:23.372971    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:23.372971    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:23.372971    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:23.379490    5068 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:55:23.379490    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:23.379490    5068 round_trippers.go:580]     Audit-Id: f349be0f-925c-42da-aef5-4f2263a18e75
	I0507 19:55:23.379490    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:23.379490    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:23.379490    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:23.379490    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:23.379490    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:23 GMT
	I0507 19:55:23.380043    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:23.380237    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:23.380237    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:23.380237    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:23.380237    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:23.384048    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:23.384048    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:23.384302    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:23.384302    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:23.384302    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:23.384302    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:23.384335    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:23 GMT
	I0507 19:55:23.384335    5068 round_trippers.go:580]     Audit-Id: f35ae7fe-298c-41bc-b862-220215b22a82
	I0507 19:55:23.384461    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:23.875092    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:23.875092    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:23.875092    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:23.875092    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:23.878667    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:23.879035    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:23.879035    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:23.879035    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:23.879035    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:23.879035    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:24 GMT
	I0507 19:55:23.879035    5068 round_trippers.go:580]     Audit-Id: 2be02573-ca4a-4efa-bfbe-f105b1ba0bd2
	I0507 19:55:23.879035    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:23.879273    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:23.879616    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:23.879616    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:23.879616    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:23.879616    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:23.883188    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:23.883188    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:23.883188    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:23.883188    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:23.883188    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:23.883188    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:24 GMT
	I0507 19:55:23.883188    5068 round_trippers.go:580]     Audit-Id: 7125ac3d-42f7-47e2-b862-bfd4ac05456c
	I0507 19:55:23.883188    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:23.883593    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:24.368961    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:24.368961    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:24.369139    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:24.369139    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:24.374078    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:24.374078    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:24.374078    5068 round_trippers.go:580]     Audit-Id: f3968c11-fd44-42b5-b18c-7c842291b2a6
	I0507 19:55:24.374078    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:24.374168    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:24.374168    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:24.374168    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:24.374168    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:24 GMT
	I0507 19:55:24.374370    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:24.374370    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:24.374370    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:24.374370    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:24.374370    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:24.381583    5068 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:55:24.381823    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:24.381823    5068 round_trippers.go:580]     Audit-Id: 2f4da85b-4a83-4587-849a-48d060ee880f
	I0507 19:55:24.381823    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:24.381823    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:24.381823    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:24.381823    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:24.381823    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:24 GMT
	I0507 19:55:24.382070    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:24.870468    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:24.870554    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:24.870554    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:24.870554    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:24.873537    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:24.874262    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:24.874262    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:25 GMT
	I0507 19:55:24.874262    5068 round_trippers.go:580]     Audit-Id: 41d5466b-d605-4c8e-980c-0edc95aa33a5
	I0507 19:55:24.874262    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:24.874262    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:24.874262    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:24.874262    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:24.874642    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:24.875820    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:24.875908    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:24.875908    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:24.875908    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:24.880377    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:24.880377    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:24.880377    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:24.880377    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:25 GMT
	I0507 19:55:24.880455    5068 round_trippers.go:580]     Audit-Id: 5955ec14-66eb-4838-b574-f8443cff5d7e
	I0507 19:55:24.880455    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:24.880455    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:24.880455    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:24.880684    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:25.371770    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:25.371843    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:25.371843    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:25.371954    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:25.375572    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:25.375572    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:25.375572    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:25.375572    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:25.375572    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:25.375572    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:25 GMT
	I0507 19:55:25.375572    5068 round_trippers.go:580]     Audit-Id: e6377765-855d-4610-8bcd-4700bf56a371
	I0507 19:55:25.375572    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:25.375572    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:25.377058    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:25.377058    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:25.377139    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:25.377139    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:25.379949    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:25.380052    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:25.380052    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:25.380052    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:25.380052    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:25 GMT
	I0507 19:55:25.380052    5068 round_trippers.go:580]     Audit-Id: 1b9c5d3b-988a-49eb-b873-482da53446d0
	I0507 19:55:25.380052    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:25.380120    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:25.380150    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:25.380988    5068 pod_ready.go:102] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"False"
	I0507 19:55:25.873665    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:25.873665    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:25.873665    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:25.873665    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:25.877345    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:25.877856    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:25.877856    5068 round_trippers.go:580]     Audit-Id: aef92fb5-83ef-4f8d-aef6-e2307085a952
	I0507 19:55:25.877856    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:25.877856    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:25.877856    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:25.877856    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:25.877856    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:26 GMT
	I0507 19:55:25.878243    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:25.879212    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:25.879302    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:25.879302    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:25.879390    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:25.882337    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:25.882337    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:25.882453    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:25.882453    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:25.882453    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:26 GMT
	I0507 19:55:25.882453    5068 round_trippers.go:580]     Audit-Id: 3795a3b2-1220-4194-ad06-46003e2ee010
	I0507 19:55:25.882453    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:25.882453    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:25.882595    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:26.377283    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:26.377283    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:26.377283    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:26.377283    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:26.381078    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:26.381199    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:26.381199    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:26.381199    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:26.381199    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:26 GMT
	I0507 19:55:26.381305    5068 round_trippers.go:580]     Audit-Id: aa45e9bf-6651-4c4d-ae98-056ae7e014fb
	I0507 19:55:26.381305    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:26.381358    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:26.381692    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:26.382815    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:26.382815    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:26.382815    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:26.382815    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:26.387741    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:26.387741    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:26.388283    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:26.388283    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:26.388283    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:26 GMT
	I0507 19:55:26.388283    5068 round_trippers.go:580]     Audit-Id: bcc4b40a-e8db-447c-b74f-29027e333301
	I0507 19:55:26.388283    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:26.388283    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:26.388579    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:26.864287    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:26.864287    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:26.864287    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:26.864287    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:26.869611    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:55:26.869611    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:26.869611    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:26.869611    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:26.869611    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:27 GMT
	I0507 19:55:26.869611    5068 round_trippers.go:580]     Audit-Id: ecd8d3c6-13b8-40d9-9b12-0ad594c77c6a
	I0507 19:55:26.869611    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:26.869611    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:26.870131    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:26.870813    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:26.870813    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:26.870884    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:26.870884    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:26.873054    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:26.873054    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:26.873054    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:26.873054    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:26.873054    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:26.873054    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:27 GMT
	I0507 19:55:26.873054    5068 round_trippers.go:580]     Audit-Id: f92dd98e-2b4f-4277-b992-628c311f0f21
	I0507 19:55:26.873054    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:26.874019    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:27.377953    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:27.377953    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:27.377953    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:27.377953    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:27.381543    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:27.381543    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:27.381543    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:27.381543    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:27 GMT
	I0507 19:55:27.381543    5068 round_trippers.go:580]     Audit-Id: 38a443ad-f224-4d34-95ca-cb65ea17cc45
	I0507 19:55:27.381543    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:27.381543    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:27.381543    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:27.382413    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:27.383397    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:27.383474    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:27.383474    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:27.383474    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:27.386956    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:27.386956    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:27.386956    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:27 GMT
	I0507 19:55:27.386956    5068 round_trippers.go:580]     Audit-Id: 725ace86-cb4e-45b8-a6d5-5c939aceb7e1
	I0507 19:55:27.386956    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:27.386956    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:27.386956    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:27.386956    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:27.386956    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:27.387488    5068 pod_ready.go:102] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"False"
	I0507 19:55:27.874318    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:27.874386    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:27.874386    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:27.874386    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:27.880822    5068 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:55:27.880822    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:27.881018    5068 round_trippers.go:580]     Audit-Id: c1daed89-532a-47da-b183-80544bd31493
	I0507 19:55:27.881018    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:27.881055    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:27.881055    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:27.881055    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:27.881055    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:28 GMT
	I0507 19:55:27.881172    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:27.881813    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:27.881813    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:27.881813    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:27.881813    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:27.885049    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:27.885049    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:27.885049    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:27.885049    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:27.885049    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:27.885049    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:28 GMT
	I0507 19:55:27.885049    5068 round_trippers.go:580]     Audit-Id: 0836bf61-5f18-4032-965c-fb83a0839565
	I0507 19:55:27.885366    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:27.885546    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:28.363390    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:28.363470    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:28.363470    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:28.363470    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:28.366682    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:28.366682    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:28.367272    5068 round_trippers.go:580]     Audit-Id: 540fd999-2cab-4389-bc62-91a119dbc8bf
	I0507 19:55:28.367272    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:28.367272    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:28.367272    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:28.367272    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:28.367272    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:28 GMT
	I0507 19:55:28.367524    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:28.368640    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:28.368732    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:28.368732    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:28.368732    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:28.373315    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:28.373315    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:28.373315    5068 round_trippers.go:580]     Audit-Id: 48f0269d-9bdf-4b8b-ada6-06ec49d431ab
	I0507 19:55:28.373315    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:28.373315    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:28.373315    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:28.373315    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:28.373315    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:28 GMT
	I0507 19:55:28.373858    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:28.865980    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:28.865980    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:28.865980    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:28.865980    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:28.870520    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:28.870520    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:28.870520    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:28.870520    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:28.870520    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:29 GMT
	I0507 19:55:28.870520    5068 round_trippers.go:580]     Audit-Id: fc99b8dc-a4dd-4531-9abf-7daf591e749f
	I0507 19:55:28.870520    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:28.870520    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:28.871713    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:28.872375    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:28.872375    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:28.872375    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:28.872375    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:28.877554    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:55:28.877615    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:28.877615    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:28.877615    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:28.877615    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:29 GMT
	I0507 19:55:28.877615    5068 round_trippers.go:580]     Audit-Id: 87e530a7-8491-4b20-a465-de99f76030d0
	I0507 19:55:28.877615    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:28.877615    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:28.877706    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:29.365706    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:29.365808    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:29.365888    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:29.365888    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:29.370749    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:29.370906    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:29.370906    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:29.370906    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:29 GMT
	I0507 19:55:29.370986    5068 round_trippers.go:580]     Audit-Id: 0c32de1b-8784-42f1-b4d9-31ef2048b0d0
	I0507 19:55:29.370986    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:29.370986    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:29.370986    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:29.371622    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:29.372735    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:29.372735    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:29.372735    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:29.372819    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:29.375370    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:29.375825    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:29.375825    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:29.375825    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:29.375825    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:29 GMT
	I0507 19:55:29.375825    5068 round_trippers.go:580]     Audit-Id: f054eafd-bb1a-4316-8cb0-c19ddcd71ca7
	I0507 19:55:29.375825    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:29.375825    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:29.376072    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:29.876830    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:29.876830    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:29.876830    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:29.877113    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:29.880415    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:29.880415    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:29.880415    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:29.880415    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:29.880415    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:29.880415    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:30 GMT
	I0507 19:55:29.880415    5068 round_trippers.go:580]     Audit-Id: 67dcdc9d-6437-45ff-99e6-80397acd4463
	I0507 19:55:29.880415    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:29.881285    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:29.882292    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:29.882292    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:29.882292    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:29.882375    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:29.884629    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:29.884629    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:29.884629    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:29.885390    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:29.885390    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:30 GMT
	I0507 19:55:29.885390    5068 round_trippers.go:580]     Audit-Id: 06f22483-3580-4967-9d0e-024bdb5d0f76
	I0507 19:55:29.885390    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:29.885390    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:29.885585    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:29.885626    5068 pod_ready.go:102] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"False"
	I0507 19:55:30.378148    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:30.378618    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:30.378717    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:30.378717    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:30.382302    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:30.382363    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:30.382363    5068 round_trippers.go:580]     Audit-Id: f6f09b30-664c-47e0-8528-d27dbd1eb268
	I0507 19:55:30.382363    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:30.382363    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:30.382363    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:30.382363    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:30.382363    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:30 GMT
	I0507 19:55:30.383007    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:30.383694    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:30.383792    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:30.383792    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:30.383792    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:30.387482    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:30.387482    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:30.387482    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:30.387482    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:30.387482    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:30.387482    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:30 GMT
	I0507 19:55:30.387482    5068 round_trippers.go:580]     Audit-Id: 785bcc8c-5f00-4351-9f44-580183102f50
	I0507 19:55:30.387482    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:30.387482    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:30.878401    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:30.878401    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:30.878401    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:30.878401    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:30.884945    5068 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:55:30.885163    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:30.885163    5068 round_trippers.go:580]     Audit-Id: 5d5788f5-f57a-4fcb-8126-f2a69f248a68
	I0507 19:55:30.885163    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:30.885163    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:30.885163    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:30.885163    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:30.885163    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:31 GMT
	I0507 19:55:30.885163    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:30.885163    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:30.885163    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:30.885163    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:30.885163    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:30.888201    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:30.888201    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:30.888201    5068 round_trippers.go:580]     Audit-Id: 9150cab1-8995-47d1-8e8b-dc43f2088c5c
	I0507 19:55:30.888201    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:30.888201    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:30.888201    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:30.888201    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:30.888201    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:31 GMT
	I0507 19:55:30.888201    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:31.364320    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:31.364320    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:31.364320    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:31.364320    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:31.368070    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:31.368070    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:31.368070    5068 round_trippers.go:580]     Audit-Id: 5cb90a20-8453-4dc6-83ff-ef18345bb7cb
	I0507 19:55:31.368070    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:31.368070    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:31.368070    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:31.368070    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:31.368070    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:31 GMT
	I0507 19:55:31.368691    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:31.369115    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:31.369115    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:31.369115    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:31.369115    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:31.371705    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:31.371705    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:31.371705    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:31.371705    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:31.371705    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:31 GMT
	I0507 19:55:31.372350    5068 round_trippers.go:580]     Audit-Id: d81ae471-b947-4aea-81ef-43d1190ffb22
	I0507 19:55:31.372350    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:31.372350    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:31.372632    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:31.877946    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:31.877946    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:31.877946    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:31.877946    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:31.882562    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:31.883469    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:31.883469    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:31.883469    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:31.883469    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:32 GMT
	I0507 19:55:31.883469    5068 round_trippers.go:580]     Audit-Id: ac4796b5-ce37-4890-b779-d77b6f744afc
	I0507 19:55:31.883469    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:31.883469    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:31.883469    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:31.884279    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:31.884279    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:31.884279    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:31.884279    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:31.890246    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:55:31.890246    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:31.890246    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:31.890246    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:31.890246    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:31.890246    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:32 GMT
	I0507 19:55:31.890246    5068 round_trippers.go:580]     Audit-Id: f503f19d-5e04-45fa-90c0-bc4be3f0b9f7
	I0507 19:55:31.890246    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:31.890246    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:31.890989    5068 pod_ready.go:102] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"False"
	I0507 19:55:32.377662    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:32.377662    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:32.377662    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:32.377756    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:32.380989    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:32.380989    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:32.380989    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:32 GMT
	I0507 19:55:32.380989    5068 round_trippers.go:580]     Audit-Id: cec7fa6c-bc29-40bc-9dad-60c0fbf9e35a
	I0507 19:55:32.380989    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:32.380989    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:32.380989    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:32.380989    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:32.382078    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:32.382698    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:32.382698    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:32.382698    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:32.382698    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:32.385357    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:32.385910    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:32.385910    5068 round_trippers.go:580]     Audit-Id: 0c734518-122d-4ed7-91b0-a819ec60d8c7
	I0507 19:55:32.385910    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:32.385910    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:32.385910    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:32.385910    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:32.385910    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:32 GMT
	I0507 19:55:32.386238    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:32.876791    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:32.876791    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:32.876791    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:32.876791    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:32.883574    5068 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:55:32.883574    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:32.883574    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:33 GMT
	I0507 19:55:32.883574    5068 round_trippers.go:580]     Audit-Id: 076e5530-429c-43cb-a88a-4497329ae1c3
	I0507 19:55:32.883574    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:32.883574    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:32.883574    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:32.883574    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:32.884174    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:32.885097    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:32.885097    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:32.885097    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:32.885097    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:32.888687    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:32.888687    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:32.888687    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:32.888687    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:32.888687    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:32.888687    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:33 GMT
	I0507 19:55:32.888687    5068 round_trippers.go:580]     Audit-Id: 2fda0364-4b3b-4f12-b9d0-644c1187d06a
	I0507 19:55:32.888687    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:32.888687    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:33.376454    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:33.376454    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:33.376539    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:33.376539    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:33.380852    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:33.380852    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:33.380852    5068 round_trippers.go:580]     Audit-Id: ea69d01c-f524-4281-ad66-d71a28f25aae
	I0507 19:55:33.380852    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:33.380852    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:33.380852    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:33.381126    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:33.381126    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:33 GMT
	I0507 19:55:33.381400    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:33.382639    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:33.382639    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:33.382734    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:33.382734    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:33.386485    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:33.386485    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:33.386485    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:33.386485    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:33.386485    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:33 GMT
	I0507 19:55:33.386485    5068 round_trippers.go:580]     Audit-Id: 8e263e89-0f61-4a46-a776-2ff1f7d425a8
	I0507 19:55:33.386485    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:33.386485    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:33.387014    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:33.876216    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:33.876216    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:33.876216    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:33.876216    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:33.881080    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:33.881234    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:33.881234    5068 round_trippers.go:580]     Audit-Id: 5fc7f970-e6d6-42a1-8a03-ba917145b786
	I0507 19:55:33.881234    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:33.881234    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:33.881234    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:33.881234    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:33.881234    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:34 GMT
	I0507 19:55:33.881516    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:33.882572    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:33.882572    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:33.882654    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:33.882654    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:33.886901    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:33.886901    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:33.886901    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:33.886901    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:34 GMT
	I0507 19:55:33.886901    5068 round_trippers.go:580]     Audit-Id: 409bcdf2-4739-4864-a3b9-c4be6371068f
	I0507 19:55:33.886901    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:33.886901    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:33.886901    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:33.887474    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:34.376079    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:34.376079    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:34.376183    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:34.376183    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:34.380985    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:34.381292    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:34.381292    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:34.381292    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:34.381292    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:34.381292    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:34.381292    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:34 GMT
	I0507 19:55:34.381292    5068 round_trippers.go:580]     Audit-Id: d3e5d173-c39f-43fa-a658-0340a729dca1
	I0507 19:55:34.381710    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:34.382794    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:34.382794    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:34.382857    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:34.382857    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:34.387376    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:34.387376    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:34.387376    5068 round_trippers.go:580]     Audit-Id: d27ee9af-0dfe-44b3-9134-548c51a04247
	I0507 19:55:34.387376    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:34.387376    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:34.387376    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:34.387376    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:34.387376    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:34 GMT
	I0507 19:55:34.387376    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:34.388063    5068 pod_ready.go:102] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"False"
	I0507 19:55:34.876745    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:34.877114    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:34.877114    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:34.877114    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:34.880521    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:34.880521    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:34.881545    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:35 GMT
	I0507 19:55:34.881562    5068 round_trippers.go:580]     Audit-Id: 86c0a0a3-be10-4ef4-b4f6-eeab786ed781
	I0507 19:55:34.881562    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:34.881562    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:34.881562    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:34.881562    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:34.882016    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:34.882620    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:34.882620    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:34.882620    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:34.882620    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:34.885315    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:34.886109    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:34.886109    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:35 GMT
	I0507 19:55:34.886167    5068 round_trippers.go:580]     Audit-Id: dfbd9fed-e6b7-4cd7-b64b-73d93fd7d360
	I0507 19:55:34.886167    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:34.886167    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:34.886167    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:34.886167    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:34.886167    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:35.374877    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:35.374877    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:35.374951    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:35.374951    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:35.378193    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:35.378193    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:35.378193    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:35.378193    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:35.378193    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:35 GMT
	I0507 19:55:35.378193    5068 round_trippers.go:580]     Audit-Id: 9fa30ce7-5c61-4a44-889b-84fcff84a7c6
	I0507 19:55:35.378193    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:35.378193    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:35.378965    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:35.379692    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:35.379692    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:35.379692    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:35.379692    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:35.382270    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:35.383282    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:35.383282    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:35 GMT
	I0507 19:55:35.383282    5068 round_trippers.go:580]     Audit-Id: b6d1f616-f4ca-4778-a976-c061e3a8ac09
	I0507 19:55:35.383282    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:35.383282    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:35.383282    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:35.383282    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:35.383940    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:35.875823    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:35.876083    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:35.876083    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:35.876083    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:35.880786    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:35.880786    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:35.880786    5068 round_trippers.go:580]     Audit-Id: 4cdb205b-d3c8-414b-afe7-4e0e7b0f9042
	I0507 19:55:35.880786    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:35.880786    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:35.880786    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:35.880786    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:35.880786    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:36 GMT
	I0507 19:55:35.881166    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:35.882115    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:35.882115    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:35.882188    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:35.882188    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:35.885336    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:35.885336    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:35.885336    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:35.885336    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:35.885808    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:36 GMT
	I0507 19:55:35.885808    5068 round_trippers.go:580]     Audit-Id: 79844b2e-d3f3-44dd-a4f3-f48151b805d3
	I0507 19:55:35.885808    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:35.885808    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:35.886041    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:36.374659    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:36.374659    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:36.374747    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:36.374747    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:36.378038    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:36.378038    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:36.378038    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:36.378038    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:36 GMT
	I0507 19:55:36.378038    5068 round_trippers.go:580]     Audit-Id: 40d3f72d-4405-4457-8994-b54f664af5da
	I0507 19:55:36.378038    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:36.378038    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:36.378038    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:36.378038    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:36.379401    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:36.379401    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:36.379401    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:36.379401    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:36.381024    5068 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0507 19:55:36.381024    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:36.381024    5068 round_trippers.go:580]     Audit-Id: 29f7d01b-814b-473e-a08b-d4a18e6cf5f1
	I0507 19:55:36.381024    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:36.382018    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:36.382018    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:36.382018    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:36.382018    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:36 GMT
	I0507 19:55:36.382122    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:36.874100    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:36.874178    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:36.874178    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:36.874251    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:36.879536    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:55:36.879536    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:36.880077    5068 round_trippers.go:580]     Audit-Id: 3863eae2-f03a-4c3c-ad3f-6e3fe8ea7986
	I0507 19:55:36.880077    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:36.880077    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:36.880077    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:36.880077    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:36.880077    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:37 GMT
	I0507 19:55:36.880260    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:36.880926    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:36.880926    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:36.880926    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:36.880926    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:36.884057    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:36.884057    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:36.884057    5068 round_trippers.go:580]     Audit-Id: 79b73a13-a283-40bd-b570-48f50292fa8f
	I0507 19:55:36.884591    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:36.884591    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:36.884591    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:36.884591    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:36.884591    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:37 GMT
	I0507 19:55:36.884937    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:36.886080    5068 pod_ready.go:102] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"False"
	I0507 19:55:37.363333    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:37.363448    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:37.363448    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:37.363448    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:37.367751    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:37.367751    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:37.367751    5068 round_trippers.go:580]     Audit-Id: 99130986-a5bc-4a66-b15a-76df7f51b14f
	I0507 19:55:37.367751    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:37.367751    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:37.367751    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:37.367751    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:37.367751    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:37 GMT
	I0507 19:55:37.368249    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:37.369235    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:37.369235    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:37.369318    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:37.369318    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:37.371952    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:37.371952    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:37.371952    5068 round_trippers.go:580]     Audit-Id: a65d930f-23b4-44b0-b8c7-b2513d89302b
	I0507 19:55:37.371952    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:37.371952    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:37.371952    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:37.371952    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:37.371952    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:37 GMT
	I0507 19:55:37.372754    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:37.876585    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:37.876689    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:37.876689    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:37.876689    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:37.880127    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:37.880127    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:37.880127    5068 round_trippers.go:580]     Audit-Id: 69fc7208-f200-4424-a2f9-40ff9f269265
	I0507 19:55:37.880127    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:37.880127    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:37.880127    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:37.880127    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:37.880127    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:38 GMT
	I0507 19:55:37.880127    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:37.881862    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:37.882417    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:37.882417    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:37.882417    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:37.887410    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:37.887410    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:37.887410    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:37.887410    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:37.887410    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:37.887410    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:37.887410    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:38 GMT
	I0507 19:55:37.887410    5068 round_trippers.go:580]     Audit-Id: ef7b242c-9c2b-4d65-8c25-88ef55273746
	I0507 19:55:37.887410    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:38.368201    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:38.368291    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:38.368291    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:38.368291    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:38.378714    5068 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0507 19:55:38.378714    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:38.378714    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:38.378714    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:38.378714    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:38.378714    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:38 GMT
	I0507 19:55:38.378714    5068 round_trippers.go:580]     Audit-Id: 71ccddfd-89f0-49e6-9e99-d0d6ec724f80
	I0507 19:55:38.378714    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:38.379851    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1756","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6841 chars]
	I0507 19:55:38.380603    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:38.380603    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:38.380603    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:38.380603    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:38.382701    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:38.382701    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:38.382701    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:38.382701    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:38.382701    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:38.382701    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:38.382701    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:38 GMT
	I0507 19:55:38.382701    5068 round_trippers.go:580]     Audit-Id: fb24a29d-1631-48dc-a044-8f2131e2c172
	I0507 19:55:38.382701    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:38.877473    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:55:38.877473    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:38.877473    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:38.877473    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:38.881223    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:38.881269    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:38.881269    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:39 GMT
	I0507 19:55:38.881269    5068 round_trippers.go:580]     Audit-Id: 94fd9f6e-aa47-4830-97de-d4db98c1aed6
	I0507 19:55:38.881269    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:38.881269    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:38.881269    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:38.881269    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:38.881269    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1873","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6788 chars]
	I0507 19:55:38.882282    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:38.882354    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:38.882354    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:38.882354    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:38.885042    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:38.885042    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:38.885042    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:39 GMT
	I0507 19:55:38.885042    5068 round_trippers.go:580]     Audit-Id: 3d32af37-90b3-4b8c-902d-69d09f8e02de
	I0507 19:55:38.885042    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:38.885042    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:38.885042    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:38.885042    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:38.885737    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:38.886154    5068 pod_ready.go:92] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"True"
	I0507 19:55:38.886269    5068 pod_ready.go:81] duration metric: took 29.5233609s for pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace to be "Ready" ...
	I0507 19:55:38.886269    5068 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:55:38.886269    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-600000
	I0507 19:55:38.886386    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:38.886386    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:38.886386    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:38.889185    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:38.889185    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:38.889185    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:38.889185    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:39 GMT
	I0507 19:55:38.889185    5068 round_trippers.go:580]     Audit-Id: a7006157-5c51-4804-b556-ba079947bb08
	I0507 19:55:38.889185    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:38.889185    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:38.889185    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:38.889816    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-600000","namespace":"kube-system","uid":"de6e93ee-7fd0-45cd-82eb-44edd4a2c2e3","resourceVersion":"1798","creationTimestamp":"2024-05-07T19:54:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.135.22:2379","kubernetes.io/config.hash":"1581bf6b00d338797c8fb8b10b74abde","kubernetes.io/config.mirror":"1581bf6b00d338797c8fb8b10b74abde","kubernetes.io/config.seen":"2024-05-07T19:54:28.831640546Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6160 chars]
	I0507 19:55:38.890275    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:38.890275    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:38.890275    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:38.890275    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:38.892442    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:38.893020    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:38.893020    5068 round_trippers.go:580]     Audit-Id: de268a7b-1255-4ff3-b05e-796ebab3a3d6
	I0507 19:55:38.893020    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:38.893020    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:38.893020    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:38.893020    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:38.893020    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:39 GMT
	I0507 19:55:38.893129    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:38.893514    5068 pod_ready.go:92] pod "etcd-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 19:55:38.893514    5068 pod_ready.go:81] duration metric: took 7.2452ms for pod "etcd-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:55:38.893611    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:55:38.893672    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600000
	I0507 19:55:38.893672    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:38.893672    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:38.893672    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:38.899948    5068 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:55:38.899948    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:38.899948    5068 round_trippers.go:580]     Audit-Id: a75a81a5-3304-4068-b4ee-c568419a6f88
	I0507 19:55:38.899948    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:38.899948    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:38.899948    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:38.899948    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:38.899948    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:39 GMT
	I0507 19:55:38.900266    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600000","namespace":"kube-system","uid":"4d9ace3f-e061-42ab-bb1d-3dac545f96a9","resourceVersion":"1795","creationTimestamp":"2024-05-07T19:54:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.135.22:8443","kubernetes.io/config.hash":"cd9cba8f94818776ec6d8836322192b3","kubernetes.io/config.mirror":"cd9cba8f94818776ec6d8836322192b3","kubernetes.io/config.seen":"2024-05-07T19:54:28.735132188Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:54:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7695 chars]
	I0507 19:55:38.900811    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:38.900811    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:38.900811    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:38.900811    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:38.903762    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:38.903762    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:38.903762    5068 round_trippers.go:580]     Audit-Id: c3015950-6041-4cac-a269-bbec1e0d8a3e
	I0507 19:55:38.903762    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:38.903762    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:38.903762    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:38.903762    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:38.903762    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:39 GMT
	I0507 19:55:38.904035    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:38.904410    5068 pod_ready.go:92] pod "kube-apiserver-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 19:55:38.904410    5068 pod_ready.go:81] duration metric: took 10.7982ms for pod "kube-apiserver-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:55:38.904410    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:55:38.904528    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-600000
	I0507 19:55:38.904528    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:38.904528    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:38.904589    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:38.906949    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:38.907028    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:38.907028    5068 round_trippers.go:580]     Audit-Id: a7428cab-a02b-41f9-8782-99053860c782
	I0507 19:55:38.907080    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:38.907080    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:38.907080    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:38.907080    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:38.907080    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:39 GMT
	I0507 19:55:38.907764    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-600000","namespace":"kube-system","uid":"b960b526-da40-480d-9a72-9ab8c7f2989a","resourceVersion":"1797","creationTimestamp":"2024-05-07T19:33:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f5d6aa60dc93b5e562f37ed2236c3022","kubernetes.io/config.mirror":"f5d6aa60dc93b5e562f37ed2236c3022","kubernetes.io/config.seen":"2024-05-07T19:33:37.010155750Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0507 19:55:38.907937    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:38.907937    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:38.907937    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:38.907937    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:38.910502    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:38.910502    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:38.910502    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:38.910502    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:38.910502    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:39 GMT
	I0507 19:55:38.910502    5068 round_trippers.go:580]     Audit-Id: 98228172-8c93-4a55-990d-7591ffe560c5
	I0507 19:55:38.910502    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:38.910502    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:38.911108    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:38.911471    5068 pod_ready.go:92] pod "kube-controller-manager-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 19:55:38.911537    5068 pod_ready.go:81] duration metric: took 7.1266ms for pod "kube-controller-manager-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:55:38.911537    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9fb6t" in "kube-system" namespace to be "Ready" ...
	I0507 19:55:38.911647    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9fb6t
	I0507 19:55:38.911647    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:38.911647    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:38.911647    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:38.914472    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:38.914472    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:38.914573    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:38.914573    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:39 GMT
	I0507 19:55:38.914573    5068 round_trippers.go:580]     Audit-Id: af90375c-ac42-455d-919e-62a77b05ddd4
	I0507 19:55:38.914573    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:38.914573    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:38.914635    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:38.914869    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9fb6t","generateName":"kube-proxy-","namespace":"kube-system","uid":"f91cc93c-cb87-4494-9e11-b3bf74b9311d","resourceVersion":"1858","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"952e0024-0710-460c-920c-3959ceadbd10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"952e0024-0710-460c-920c-3959ceadbd10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6067 chars]
	I0507 19:55:38.915879    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:55:38.915972    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:38.915972    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:38.916037    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:38.922114    5068 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:55:38.922114    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:38.922114    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:38.922114    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:38.922114    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:38.922114    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:38.922114    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:39 GMT
	I0507 19:55:38.922114    5068 round_trippers.go:580]     Audit-Id: 0b15213b-07ac-4621-b582-75697f90d892
	I0507 19:55:38.922114    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3","resourceVersion":"1864","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_36_40_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4583 chars]
	I0507 19:55:38.922819    5068 pod_ready.go:97] node "multinode-600000-m02" hosting pod "kube-proxy-9fb6t" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000-m02" has status "Ready":"Unknown"
	I0507 19:55:38.922819    5068 pod_ready.go:81] duration metric: took 11.2812ms for pod "kube-proxy-9fb6t" in "kube-system" namespace to be "Ready" ...
	E0507 19:55:38.922819    5068 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-600000-m02" hosting pod "kube-proxy-9fb6t" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000-m02" has status "Ready":"Unknown"
	I0507 19:55:38.922819    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c9gw5" in "kube-system" namespace to be "Ready" ...
	I0507 19:55:39.082308    5068 request.go:629] Waited for 159.4788ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9gw5
	I0507 19:55:39.082945    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9gw5
	I0507 19:55:39.082945    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:39.082945    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:39.082945    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:39.088410    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:55:39.088410    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:39.088410    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:39.088410    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:39.088410    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:39.088410    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:39 GMT
	I0507 19:55:39.088410    5068 round_trippers.go:580]     Audit-Id: 6471263c-dca1-4fe2-a086-42b113201a03
	I0507 19:55:39.088410    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:39.089257    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c9gw5","generateName":"kube-proxy-","namespace":"kube-system","uid":"9a39807c-6243-4aa2-86f4-8626031c80a6","resourceVersion":"1759","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"952e0024-0710-460c-920c-3959ceadbd10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"952e0024-0710-460c-920c-3959ceadbd10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0507 19:55:39.286319    5068 request.go:629] Waited for 196.1191ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:39.286319    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:39.286319    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:39.286319    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:39.286319    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:39.292521    5068 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:55:39.292697    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:39.292736    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:39.292736    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:39 GMT
	I0507 19:55:39.292736    5068 round_trippers.go:580]     Audit-Id: 448ded82-73cb-4141-97a1-bd711581c074
	I0507 19:55:39.292736    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:39.292736    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:39.292736    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:39.292846    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:39.293531    5068 pod_ready.go:92] pod "kube-proxy-c9gw5" in "kube-system" namespace has status "Ready":"True"
	I0507 19:55:39.293531    5068 pod_ready.go:81] duration metric: took 370.6887ms for pod "kube-proxy-c9gw5" in "kube-system" namespace to be "Ready" ...
	I0507 19:55:39.293531    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pzn8q" in "kube-system" namespace to be "Ready" ...
	I0507 19:55:39.488176    5068 request.go:629] Waited for 194.0988ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pzn8q
	I0507 19:55:39.488416    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pzn8q
	I0507 19:55:39.488504    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:39.488504    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:39.488504    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:39.491680    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:39.491680    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:39.491680    5068 round_trippers.go:580]     Audit-Id: 51b2c41b-d4d9-475c-a599-149339ebe482
	I0507 19:55:39.491680    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:39.491680    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:39.491680    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:39.491680    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:39.491680    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:39 GMT
	I0507 19:55:39.492210    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pzn8q","generateName":"kube-proxy-","namespace":"kube-system","uid":"f2506861-1f09-4193-b751-22a685a0b71b","resourceVersion":"1643","creationTimestamp":"2024-05-07T19:40:53Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"952e0024-0710-460c-920c-3959ceadbd10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:40:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"952e0024-0710-460c-920c-3959ceadbd10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0507 19:55:39.692489    5068 request.go:629] Waited for 199.4995ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 19:55:39.692629    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 19:55:39.692629    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:39.692629    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:39.692629    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:39.695030    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:39.696011    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:39.696011    5068 round_trippers.go:580]     Audit-Id: 2e5bde43-9869-4041-b2ed-34a15144dafb
	I0507 19:55:39.696011    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:39.696011    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:39.696011    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:39.696011    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:39.696011    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:39 GMT
	I0507 19:55:39.696520    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m03","uid":"ec7533ad-814b-49fe-bc8d-a070f7fb171f","resourceVersion":"1814","creationTimestamp":"2024-05-07T19:50:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_50_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4398 chars]
	I0507 19:55:39.697056    5068 pod_ready.go:97] node "multinode-600000-m03" hosting pod "kube-proxy-pzn8q" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000-m03" has status "Ready":"Unknown"
	I0507 19:55:39.697056    5068 pod_ready.go:81] duration metric: took 403.4983ms for pod "kube-proxy-pzn8q" in "kube-system" namespace to be "Ready" ...
	E0507 19:55:39.697056    5068 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-600000-m03" hosting pod "kube-proxy-pzn8q" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000-m03" has status "Ready":"Unknown"
	I0507 19:55:39.697134    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:55:39.880250    5068 request.go:629] Waited for 182.8701ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600000
	I0507 19:55:39.880250    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600000
	I0507 19:55:39.880250    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:39.880250    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:39.880250    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:39.884853    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:39.884853    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:39.884853    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:39.884853    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:40 GMT
	I0507 19:55:39.884853    5068 round_trippers.go:580]     Audit-Id: b06adb08-f325-43de-8985-a868e2a7c969
	I0507 19:55:39.884853    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:39.884853    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:39.884853    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:39.885740    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-600000","namespace":"kube-system","uid":"ec3ac949-cb83-49be-a908-c93e23135ae8","resourceVersion":"1777","creationTimestamp":"2024-05-07T19:33:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c4ee79f6d4f6adb00b636f817445fef","kubernetes.io/config.mirror":"7c4ee79f6d4f6adb00b636f817445fef","kubernetes.io/config.seen":"2024-05-07T19:33:44.165677427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0507 19:55:40.081781    5068 request.go:629] Waited for 195.1597ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:40.082156    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:55:40.082204    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:40.082204    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:40.082204    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:40.086273    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:40.086273    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:40.086273    5068 round_trippers.go:580]     Audit-Id: 3324e896-0052-463f-a13d-ff409c861044
	I0507 19:55:40.086273    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:40.086273    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:40.086273    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:40.086273    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:40.086273    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:40 GMT
	I0507 19:55:40.086273    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:55:40.087310    5068 pod_ready.go:92] pod "kube-scheduler-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 19:55:40.087377    5068 pod_ready.go:81] duration metric: took 390.2181ms for pod "kube-scheduler-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:55:40.087377    5068 pod_ready.go:38] duration metric: took 30.7347951s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 19:55:40.087446    5068 api_server.go:52] waiting for apiserver process to appear ...
	I0507 19:55:40.094196    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 19:55:40.116731    5068 command_runner.go:130] > 7c95e3addc4b
	I0507 19:55:40.117791    5068 logs.go:276] 1 containers: [7c95e3addc4b]
	I0507 19:55:40.123961    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 19:55:40.142617    5068 command_runner.go:130] > ac320a872e77
	I0507 19:55:40.143358    5068 logs.go:276] 1 containers: [ac320a872e77]
	I0507 19:55:40.151044    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 19:55:40.172655    5068 command_runner.go:130] > d27627c19808
	I0507 19:55:40.172655    5068 command_runner.go:130] > 9550b237d8d7
	I0507 19:55:40.172789    5068 logs.go:276] 2 containers: [d27627c19808 9550b237d8d7]
	I0507 19:55:40.182639    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 19:55:40.200226    5068 command_runner.go:130] > 45341720d5be
	I0507 19:55:40.200226    5068 command_runner.go:130] > 7cefdac2050f
	I0507 19:55:40.201745    5068 logs.go:276] 2 containers: [45341720d5be 7cefdac2050f]
	I0507 19:55:40.210679    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 19:55:40.230156    5068 command_runner.go:130] > 5255a972ff6c
	I0507 19:55:40.231156    5068 command_runner.go:130] > aa9692c1fbd3
	I0507 19:55:40.231156    5068 logs.go:276] 2 containers: [5255a972ff6c aa9692c1fbd3]
	I0507 19:55:40.237394    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 19:55:40.265983    5068 command_runner.go:130] > 922d1e2b8745
	I0507 19:55:40.265983    5068 command_runner.go:130] > 3067f16e2e38
	I0507 19:55:40.266917    5068 logs.go:276] 2 containers: [922d1e2b8745 3067f16e2e38]
	I0507 19:55:40.273673    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 19:55:40.291810    5068 command_runner.go:130] > 29b5cae0b8f1
	I0507 19:55:40.291810    5068 command_runner.go:130] > 2d49ad078ed3
	I0507 19:55:40.293367    5068 logs.go:276] 2 containers: [29b5cae0b8f1 2d49ad078ed3]
	I0507 19:55:40.293367    5068 logs.go:123] Gathering logs for dmesg ...
	I0507 19:55:40.293367    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 19:55:40.312312    5068 command_runner.go:130] > [May 7 19:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0507 19:55:40.312312    5068 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0507 19:55:40.312312    5068 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0507 19:55:40.312312    5068 command_runner.go:130] > [  +0.116232] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0507 19:55:40.312838    5068 command_runner.go:130] > [  +0.022195] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +0.000003] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +0.059863] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +0.024233] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0507 19:55:40.312901    5068 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0507 19:55:40.312901    5068 command_runner.go:130] > [May 7 19:53] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +1.293154] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +1.138766] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +7.459478] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0507 19:55:40.312901    5068 command_runner.go:130] > [ +43.605395] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +0.173535] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	I0507 19:55:40.312901    5068 command_runner.go:130] > [May 7 19:54] systemd-fstab-generator[975]: Ignoring "noauto" option for root device
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +0.087049] kauditd_printk_skb: 73 callbacks suppressed
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +0.469142] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +0.182768] systemd-fstab-generator[1025]: Ignoring "noauto" option for root device
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +0.198440] systemd-fstab-generator[1039]: Ignoring "noauto" option for root device
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +2.865339] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +0.189423] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +0.164316] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	I0507 19:55:40.312901    5068 command_runner.go:130] > [  +0.220106] systemd-fstab-generator[1266]: Ignoring "noauto" option for root device
	I0507 19:55:40.313430    5068 command_runner.go:130] > [  +0.801286] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	I0507 19:55:40.313430    5068 command_runner.go:130] > [  +0.081896] kauditd_printk_skb: 205 callbacks suppressed
	I0507 19:55:40.313430    5068 command_runner.go:130] > [  +3.512673] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
	I0507 19:55:40.313430    5068 command_runner.go:130] > [  +1.511112] kauditd_printk_skb: 64 callbacks suppressed
	I0507 19:55:40.313430    5068 command_runner.go:130] > [  +5.012853] kauditd_printk_skb: 25 callbacks suppressed
	I0507 19:55:40.313430    5068 command_runner.go:130] > [  +3.386216] systemd-fstab-generator[2338]: Ignoring "noauto" option for root device
	I0507 19:55:40.313543    5068 command_runner.go:130] > [  +7.924740] kauditd_printk_skb: 55 callbacks suppressed
	I0507 19:55:40.316193    5068 logs.go:123] Gathering logs for kube-apiserver [7c95e3addc4b] ...
	I0507 19:55:40.316193    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c95e3addc4b"
	I0507 19:55:40.347369    5068 command_runner.go:130] ! I0507 19:54:30.988770       1 options.go:221] external host was not specified, using 172.19.135.22
	I0507 19:55:40.347418    5068 command_runner.go:130] ! I0507 19:54:30.995893       1 server.go:148] Version: v1.30.0
	I0507 19:55:40.347465    5068 command_runner.go:130] ! I0507 19:54:30.996132       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:40.347515    5068 command_runner.go:130] ! I0507 19:54:31.800337       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0507 19:55:40.347562    5068 command_runner.go:130] ! I0507 19:54:31.800374       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0507 19:55:40.347630    5068 command_runner.go:130] ! I0507 19:54:31.801064       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0507 19:55:40.347685    5068 command_runner.go:130] ! I0507 19:54:31.801131       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0507 19:55:40.347734    5068 command_runner.go:130] ! I0507 19:54:31.801553       1 instance.go:299] Using reconciler: lease
	I0507 19:55:40.347837    5068 command_runner.go:130] ! I0507 19:54:32.352039       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0507 19:55:40.347885    5068 command_runner.go:130] ! W0507 19:54:32.352075       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:40.347926    5068 command_runner.go:130] ! I0507 19:54:32.609708       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0507 19:55:40.347975    5068 command_runner.go:130] ! I0507 19:54:32.610006       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0507 19:55:40.347975    5068 command_runner.go:130] ! I0507 19:54:32.836522       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0507 19:55:40.348080    5068 command_runner.go:130] ! I0507 19:54:32.999148       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0507 19:55:40.348080    5068 command_runner.go:130] ! I0507 19:54:33.030018       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0507 19:55:40.348134    5068 command_runner.go:130] ! W0507 19:54:33.030136       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:40.348196    5068 command_runner.go:130] ! W0507 19:54:33.030146       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:40.348250    5068 command_runner.go:130] ! I0507 19:54:33.030562       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0507 19:55:40.348303    5068 command_runner.go:130] ! W0507 19:54:33.030671       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:40.348357    5068 command_runner.go:130] ! I0507 19:54:33.031835       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0507 19:55:40.348406    5068 command_runner.go:130] ! I0507 19:54:33.032596       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0507 19:55:40.348508    5068 command_runner.go:130] ! W0507 19:54:33.032785       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0507 19:55:40.348508    5068 command_runner.go:130] ! W0507 19:54:33.032807       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0507 19:55:40.348614    5068 command_runner.go:130] ! I0507 19:54:33.034337       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0507 19:55:40.348668    5068 command_runner.go:130] ! W0507 19:54:33.034455       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0507 19:55:40.348717    5068 command_runner.go:130] ! I0507 19:54:33.035255       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0507 19:55:40.348717    5068 command_runner.go:130] ! W0507 19:54:33.035288       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:40.348818    5068 command_runner.go:130] ! W0507 19:54:33.035294       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:40.348818    5068 command_runner.go:130] ! I0507 19:54:33.035838       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0507 19:55:40.348920    5068 command_runner.go:130] ! W0507 19:54:33.035918       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:40.348920    5068 command_runner.go:130] ! W0507 19:54:33.035968       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:40.349023    5068 command_runner.go:130] ! I0507 19:54:33.036453       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0507 19:55:40.349023    5068 command_runner.go:130] ! I0507 19:54:33.038094       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0507 19:55:40.349127    5068 command_runner.go:130] ! W0507 19:54:33.038196       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:40.349182    5068 command_runner.go:130] ! W0507 19:54:33.038204       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:40.349231    5068 command_runner.go:130] ! I0507 19:54:33.038675       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0507 19:55:40.349231    5068 command_runner.go:130] ! W0507 19:54:33.038880       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:40.349285    5068 command_runner.go:130] ! W0507 19:54:33.038891       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:40.349334    5068 command_runner.go:130] ! I0507 19:54:33.039628       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0507 19:55:40.349436    5068 command_runner.go:130] ! W0507 19:54:33.039798       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0507 19:55:40.349490    5068 command_runner.go:130] ! I0507 19:54:33.041524       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0507 19:55:40.349538    5068 command_runner.go:130] ! W0507 19:54:33.041621       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:40.349591    5068 command_runner.go:130] ! W0507 19:54:33.041630       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:40.349642    5068 command_runner.go:130] ! I0507 19:54:33.042180       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0507 19:55:40.349642    5068 command_runner.go:130] ! W0507 19:54:33.042199       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:40.349745    5068 command_runner.go:130] ! W0507 19:54:33.042204       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:40.349799    5068 command_runner.go:130] ! I0507 19:54:33.044893       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0507 19:55:40.349847    5068 command_runner.go:130] ! W0507 19:54:33.045016       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:40.349847    5068 command_runner.go:130] ! W0507 19:54:33.045025       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:40.349901    5068 command_runner.go:130] ! I0507 19:54:33.046333       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0507 19:55:40.349960    5068 command_runner.go:130] ! I0507 19:54:33.047629       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0507 19:55:40.350015    5068 command_runner.go:130] ! W0507 19:54:33.047767       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0507 19:55:40.350064    5068 command_runner.go:130] ! W0507 19:54:33.047776       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:40.350118    5068 command_runner.go:130] ! I0507 19:54:33.052196       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0507 19:55:40.350167    5068 command_runner.go:130] ! W0507 19:54:33.052296       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0507 19:55:40.350222    5068 command_runner.go:130] ! W0507 19:54:33.052305       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0507 19:55:40.350222    5068 command_runner.go:130] ! I0507 19:54:33.054428       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0507 19:55:40.350326    5068 command_runner.go:130] ! W0507 19:54:33.054530       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:40.350375    5068 command_runner.go:130] ! W0507 19:54:33.054538       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:40.350375    5068 command_runner.go:130] ! I0507 19:54:33.055154       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0507 19:55:40.350429    5068 command_runner.go:130] ! W0507 19:54:33.055244       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:40.350479    5068 command_runner.go:130] ! I0507 19:54:33.069859       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0507 19:55:40.350582    5068 command_runner.go:130] ! W0507 19:54:33.070043       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:40.350582    5068 command_runner.go:130] ! I0507 19:54:33.594507       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0507 19:55:40.350843    5068 command_runner.go:130] ! I0507 19:54:33.594682       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0507 19:55:40.350843    5068 command_runner.go:130] ! I0507 19:54:33.595540       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0507 19:55:40.350951    5068 command_runner.go:130] ! I0507 19:54:33.595924       1 secure_serving.go:213] Serving securely on [::]:8443
	I0507 19:55:40.350951    5068 command_runner.go:130] ! I0507 19:54:33.596143       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 19:55:40.351012    5068 command_runner.go:130] ! I0507 19:54:33.596346       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0507 19:55:40.351068    5068 command_runner.go:130] ! I0507 19:54:33.596374       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0507 19:55:40.351068    5068 command_runner.go:130] ! I0507 19:54:33.598256       1 available_controller.go:423] Starting AvailableConditionController
	I0507 19:55:40.351154    5068 command_runner.go:130] ! I0507 19:54:33.598413       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0507 19:55:40.351154    5068 command_runner.go:130] ! I0507 19:54:33.598667       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0507 19:55:40.351154    5068 command_runner.go:130] ! I0507 19:54:33.598950       1 controller.go:116] Starting legacy_token_tracking_controller
	I0507 19:55:40.351236    5068 command_runner.go:130] ! I0507 19:54:33.599041       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0507 19:55:40.351339    5068 command_runner.go:130] ! I0507 19:54:33.599147       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0507 19:55:40.351385    5068 command_runner.go:130] ! I0507 19:54:33.599437       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0507 19:55:40.351438    5068 command_runner.go:130] ! I0507 19:54:33.600282       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0507 19:55:40.351438    5068 command_runner.go:130] ! I0507 19:54:33.600293       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0507 19:55:40.351494    5068 command_runner.go:130] ! I0507 19:54:33.600310       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0507 19:55:40.351494    5068 command_runner.go:130] ! I0507 19:54:33.600988       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0507 19:55:40.351553    5068 command_runner.go:130] ! I0507 19:54:33.601389       1 aggregator.go:163] waiting for initial CRD sync...
	I0507 19:55:40.351608    5068 command_runner.go:130] ! I0507 19:54:33.601406       1 controller.go:78] Starting OpenAPI AggregationController
	I0507 19:55:40.351608    5068 command_runner.go:130] ! I0507 19:54:33.601452       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0507 19:55:40.351658    5068 command_runner.go:130] ! I0507 19:54:33.601517       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0507 19:55:40.351711    5068 command_runner.go:130] ! I0507 19:54:33.603473       1 controller.go:139] Starting OpenAPI controller
	I0507 19:55:40.351761    5068 command_runner.go:130] ! I0507 19:54:33.603607       1 controller.go:87] Starting OpenAPI V3 controller
	I0507 19:55:40.351816    5068 command_runner.go:130] ! I0507 19:54:33.603676       1 naming_controller.go:291] Starting NamingConditionController
	I0507 19:55:40.351816    5068 command_runner.go:130] ! I0507 19:54:33.603772       1 establishing_controller.go:76] Starting EstablishingController
	I0507 19:55:40.351864    5068 command_runner.go:130] ! I0507 19:54:33.603950       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.606447       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.606495       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.617581       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.640887       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.641139       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.700222       1 shared_informer.go:320] Caches are synced for configmaps
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.702782       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.702797       1 policy_source.go:224] refreshing policies
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.720688       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.721334       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.739066       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.741686       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.742272       1 aggregator.go:165] initial CRD sync complete...
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.742439       1 autoregister_controller.go:141] Starting autoregister controller
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.742581       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.742709       1 cache.go:39] Caches are synced for autoregister controller
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.796399       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.800122       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0507 19:55:40.351918    5068 command_runner.go:130] ! I0507 19:54:33.800332       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0507 19:55:40.352457    5068 command_runner.go:130] ! I0507 19:54:33.800503       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0507 19:55:40.352511    5068 command_runner.go:130] ! I0507 19:54:33.825705       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0507 19:55:40.352561    5068 command_runner.go:130] ! I0507 19:54:34.607945       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0507 19:55:40.352561    5068 command_runner.go:130] ! W0507 19:54:35.478370       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.135.22]
	I0507 19:55:40.352664    5068 command_runner.go:130] ! I0507 19:54:35.480604       1 controller.go:615] quota admission added evaluator for: endpoints
	I0507 19:55:40.352717    5068 command_runner.go:130] ! I0507 19:54:35.493313       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0507 19:55:40.352767    5068 command_runner.go:130] ! I0507 19:54:36.265995       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0507 19:55:40.352767    5068 command_runner.go:130] ! I0507 19:54:36.444774       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0507 19:55:40.352821    5068 command_runner.go:130] ! I0507 19:54:36.460585       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0507 19:55:40.352871    5068 command_runner.go:130] ! I0507 19:54:36.562263       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0507 19:55:40.352971    5068 command_runner.go:130] ! I0507 19:54:36.572917       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0507 19:55:40.360289    5068 logs.go:123] Gathering logs for etcd [ac320a872e77] ...
	I0507 19:55:40.360289    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac320a872e77"
	I0507 19:55:40.385610    5068 command_runner.go:130] ! {"level":"warn","ts":"2024-05-07T19:54:30.550295Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0507 19:55:40.385610    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.55691Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.19.135.22:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.19.135.22:2380","--initial-cluster=multinode-600000=https://172.19.135.22:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.19.135.22:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.19.135.22:2380","--name=multinode-600000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0507 19:55:40.385610    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.557392Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0507 19:55:40.385610    5068 command_runner.go:130] ! {"level":"warn","ts":"2024-05-07T19:54:30.557435Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0507 19:55:40.385610    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.557445Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.19.135.22:2380"]}
	I0507 19:55:40.385610    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.557477Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0507 19:55:40.385610    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.567644Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.19.135.22:2379"]}
	I0507 19:55:40.386147    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.569078Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-600000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.19.135.22:2380"],"listen-peer-urls":["https://172.19.135.22:2380"],"advertise-client-urls":["https://172.19.135.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.135.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0507 19:55:40.386147    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.589786Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"17.628697ms"}
	I0507 19:55:40.386147    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.62481Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0507 19:55:40.386296    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.649734Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"9263975694bef132","local-member-id":"aac5eb588ad33a11","commit-index":1911}
	I0507 19:55:40.386296    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.650002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 switched to configuration voters=()"}
	I0507 19:55:40.386296    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.650099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became follower at term 2"}
	I0507 19:55:40.386296    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.650259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aac5eb588ad33a11 [peers: [], term: 2, commit: 1911, applied: 0, lastindex: 1911, lastterm: 2]"}
	I0507 19:55:40.386461    5068 command_runner.go:130] ! {"level":"warn","ts":"2024-05-07T19:54:30.665767Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0507 19:55:40.386461    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.674281Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1115}
	I0507 19:55:40.386461    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.683184Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1668}
	I0507 19:55:40.386569    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.694481Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0507 19:55:40.386629    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.704352Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"aac5eb588ad33a11","timeout":"7s"}
	I0507 19:55:40.386686    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.708328Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"aac5eb588ad33a11"}
	I0507 19:55:40.386686    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.708388Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"aac5eb588ad33a11","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0507 19:55:40.386686    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.710881Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0507 19:55:40.386686    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.711472Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0507 19:55:40.386845    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.71284Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0507 19:55:40.386845    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.712991Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0507 19:55:40.386845    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.713531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 switched to configuration voters=(12305500322378496529)"}
	I0507 19:55:40.386946    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.713649Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9263975694bef132","local-member-id":"aac5eb588ad33a11","added-peer-id":"aac5eb588ad33a11","added-peer-peer-urls":["https://172.19.143.74:2380"]}
	I0507 19:55:40.386989    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.714311Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9263975694bef132","local-member-id":"aac5eb588ad33a11","cluster-version":"3.5"}
	I0507 19:55:40.387059    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.714406Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0507 19:55:40.387135    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.727875Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0507 19:55:40.387174    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.733606Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.135.22:2380"}
	I0507 19:55:40.387226    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.733844Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.135.22:2380"}
	I0507 19:55:40.387295    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.734234Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aac5eb588ad33a11","initial-advertise-peer-urls":["https://172.19.135.22:2380"],"listen-peer-urls":["https://172.19.135.22:2380"],"advertise-client-urls":["https://172.19.135.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.135.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0507 19:55:40.387295    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.735199Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0507 19:55:40.387376    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 is starting a new election at term 2"}
	I0507 19:55:40.387376    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became pre-candidate at term 2"}
	I0507 19:55:40.387376    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 received MsgPreVoteResp from aac5eb588ad33a11 at term 2"}
	I0507 19:55:40.387643    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became candidate at term 3"}
	I0507 19:55:40.387738    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 received MsgVoteResp from aac5eb588ad33a11 at term 3"}
	I0507 19:55:40.387738    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became leader at term 3"}
	I0507 19:55:40.387738    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aac5eb588ad33a11 elected leader aac5eb588ad33a11 at term 3"}
	I0507 19:55:40.387851    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.258987Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aac5eb588ad33a11","local-member-attributes":"{Name:multinode-600000 ClientURLs:[https://172.19.135.22:2379]}","request-path":"/0/members/aac5eb588ad33a11/attributes","cluster-id":"9263975694bef132","publish-timeout":"7s"}
	I0507 19:55:40.387851    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.259161Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0507 19:55:40.387931    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.259624Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0507 19:55:40.387931    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.259711Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0507 19:55:40.387931    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.259193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0507 19:55:40.388029    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.263273Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.135.22:2379"}
	I0507 19:55:40.388076    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.265301Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0507 19:55:40.393858    5068 logs.go:123] Gathering logs for kube-scheduler [45341720d5be] ...
	I0507 19:55:40.393858    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45341720d5be"
	I0507 19:55:40.419732    5068 command_runner.go:130] ! I0507 19:54:30.888703       1 serving.go:380] Generated self-signed cert in-memory
	I0507 19:55:40.419732    5068 command_runner.go:130] ! W0507 19:54:33.652802       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0507 19:55:40.419732    5068 command_runner.go:130] ! W0507 19:54:33.652844       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:40.419732    5068 command_runner.go:130] ! W0507 19:54:33.652885       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0507 19:55:40.419732    5068 command_runner.go:130] ! W0507 19:54:33.652896       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0507 19:55:40.419732    5068 command_runner.go:130] ! I0507 19:54:33.748572       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0507 19:55:40.419732    5068 command_runner.go:130] ! I0507 19:54:33.749371       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:40.419732    5068 command_runner.go:130] ! I0507 19:54:33.757368       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0507 19:55:40.419732    5068 command_runner.go:130] ! I0507 19:54:33.758296       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0507 19:55:40.419732    5068 command_runner.go:130] ! I0507 19:54:33.758449       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0507 19:55:40.419732    5068 command_runner.go:130] ! I0507 19:54:33.759872       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 19:55:40.419732    5068 command_runner.go:130] ! I0507 19:54:33.860140       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0507 19:55:40.421927    5068 logs.go:123] Gathering logs for kube-scheduler [7cefdac2050f] ...
	I0507 19:55:40.421986    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cefdac2050f"
	I0507 19:55:40.448618    5068 command_runner.go:130] ! I0507 19:33:39.572817       1 serving.go:380] Generated self-signed cert in-memory
	I0507 19:55:40.448618    5068 command_runner.go:130] ! W0507 19:33:41.035488       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0507 19:55:40.448778    5068 command_runner.go:130] ! W0507 19:33:41.035523       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:40.448778    5068 command_runner.go:130] ! W0507 19:33:41.035535       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0507 19:55:40.448867    5068 command_runner.go:130] ! W0507 19:33:41.035542       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0507 19:55:40.448920    5068 command_runner.go:130] ! I0507 19:33:41.100225       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0507 19:55:40.448972    5068 command_runner.go:130] ! I0507 19:33:41.104133       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:40.448972    5068 command_runner.go:130] ! I0507 19:33:41.108249       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0507 19:55:40.449061    5068 command_runner.go:130] ! I0507 19:33:41.108399       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 19:55:40.449061    5068 command_runner.go:130] ! I0507 19:33:41.108383       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0507 19:55:40.449061    5068 command_runner.go:130] ! I0507 19:33:41.108658       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0507 19:55:40.449061    5068 command_runner.go:130] ! W0507 19:33:41.115439       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0507 19:55:40.449186    5068 command_runner.go:130] ! E0507 19:33:41.115515       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0507 19:55:40.449186    5068 command_runner.go:130] ! W0507 19:33:41.115737       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0507 19:55:40.449312    5068 command_runner.go:130] ! E0507 19:33:41.115969       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0507 19:55:40.449412    5068 command_runner.go:130] ! W0507 19:33:41.115744       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0507 19:55:40.449412    5068 command_runner.go:130] ! E0507 19:33:41.116415       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0507 19:55:40.449510    5068 command_runner.go:130] ! W0507 19:33:41.116670       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:40.449510    5068 command_runner.go:130] ! E0507 19:33:41.117593       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:40.449610    5068 command_runner.go:130] ! W0507 19:33:41.119709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0507 19:55:40.449707    5068 command_runner.go:130] ! E0507 19:33:41.120474       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0507 19:55:40.449707    5068 command_runner.go:130] ! W0507 19:33:41.119953       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0507 19:55:40.449804    5068 command_runner.go:130] ! E0507 19:33:41.121523       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0507 19:55:40.449904    5068 command_runner.go:130] ! W0507 19:33:41.120191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:40.449904    5068 command_runner.go:130] ! W0507 19:33:41.120237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:40.450002    5068 command_runner.go:130] ! W0507 19:33:41.120278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0507 19:55:40.450002    5068 command_runner.go:130] ! W0507 19:33:41.120316       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:40.450099    5068 command_runner.go:130] ! W0507 19:33:41.120339       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0507 19:55:40.450099    5068 command_runner.go:130] ! W0507 19:33:41.120384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0507 19:55:40.450196    5068 command_runner.go:130] ! W0507 19:33:41.120417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0507 19:55:40.450305    5068 command_runner.go:130] ! W0507 19:33:41.120451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0507 19:55:40.450305    5068 command_runner.go:130] ! E0507 19:33:41.122419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:40.450403    5068 command_runner.go:130] ! W0507 19:33:41.123409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:40.450505    5068 command_runner.go:130] ! E0507 19:33:41.123928       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:40.450505    5068 command_runner.go:130] ! E0507 19:33:41.123939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:40.450600    5068 command_runner.go:130] ! E0507 19:33:41.123946       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0507 19:55:40.450699    5068 command_runner.go:130] ! E0507 19:33:41.123954       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0507 19:55:40.450699    5068 command_runner.go:130] ! E0507 19:33:41.123963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0507 19:55:40.450796    5068 command_runner.go:130] ! E0507 19:33:41.124140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0507 19:55:40.450896    5068 command_runner.go:130] ! E0507 19:33:41.125875       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0507 19:55:40.450896    5068 command_runner.go:130] ! E0507 19:33:41.125886       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:40.450993    5068 command_runner.go:130] ! W0507 19:33:41.948129       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0507 19:55:40.450993    5068 command_runner.go:130] ! E0507 19:33:41.948157       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0507 19:55:40.451093    5068 command_runner.go:130] ! W0507 19:33:41.994257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:40.451093    5068 command_runner.go:130] ! E0507 19:33:41.994824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:40.451189    5068 command_runner.go:130] ! W0507 19:33:42.109252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:40.451288    5068 command_runner.go:130] ! E0507 19:33:42.109623       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:40.451288    5068 command_runner.go:130] ! W0507 19:33:42.156561       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0507 19:55:40.451384    5068 command_runner.go:130] ! E0507 19:33:42.157128       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0507 19:55:40.451384    5068 command_runner.go:130] ! W0507 19:33:42.162271       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:40.451652    5068 command_runner.go:130] ! E0507 19:33:42.162599       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:40.451729    5068 command_runner.go:130] ! W0507 19:33:42.229371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0507 19:55:40.451823    5068 command_runner.go:130] ! E0507 19:33:42.229525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0507 19:55:40.451823    5068 command_runner.go:130] ! W0507 19:33:42.264429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0507 19:55:40.451930    5068 command_runner.go:130] ! E0507 19:33:42.264596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0507 19:55:40.451930    5068 command_runner.go:130] ! W0507 19:33:42.284763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0507 19:55:40.452067    5068 command_runner.go:130] ! E0507 19:33:42.284872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0507 19:55:40.452162    5068 command_runner.go:130] ! W0507 19:33:42.338396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0507 19:55:40.452212    5068 command_runner.go:130] ! E0507 19:33:42.338683       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0507 19:55:40.452305    5068 command_runner.go:130] ! W0507 19:33:42.356861       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0507 19:55:40.452401    5068 command_runner.go:130] ! E0507 19:33:42.356964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0507 19:55:40.452439    5068 command_runner.go:130] ! W0507 19:33:42.435844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0507 19:55:40.452490    5068 command_runner.go:130] ! E0507 19:33:42.436739       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0507 19:55:40.452655    5068 command_runner.go:130] ! W0507 19:33:42.446379       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:40.452722    5068 command_runner.go:130] ! E0507 19:33:42.446557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:40.452805    5068 command_runner.go:130] ! W0507 19:33:42.489593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:40.452885    5068 command_runner.go:130] ! E0507 19:33:42.489896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:40.452885    5068 command_runner.go:130] ! W0507 19:33:42.647287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0507 19:55:40.452988    5068 command_runner.go:130] ! E0507 19:33:42.648065       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0507 19:55:40.452988    5068 command_runner.go:130] ! W0507 19:33:42.657928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0507 19:55:40.453091    5068 command_runner.go:130] ! E0507 19:33:42.658018       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0507 19:55:40.453091    5068 command_runner.go:130] ! I0507 19:33:43.909008       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0507 19:55:40.453192    5068 command_runner.go:130] ! E0507 19:52:16.714078       1 run.go:74] "command failed" err="finished without leader elect"
	I0507 19:55:40.463953    5068 logs.go:123] Gathering logs for kube-proxy [aa9692c1fbd3] ...
	I0507 19:55:40.463953    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9692c1fbd3"
	I0507 19:55:40.489272    5068 command_runner.go:130] ! I0507 19:33:59.788332       1 server_linux.go:69] "Using iptables proxy"
	I0507 19:55:40.490056    5068 command_runner.go:130] ! I0507 19:33:59.819474       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.143.74"]
	I0507 19:55:40.490056    5068 command_runner.go:130] ! I0507 19:33:59.872130       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0507 19:55:40.490242    5068 command_runner.go:130] ! I0507 19:33:59.872292       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0507 19:55:40.490242    5068 command_runner.go:130] ! I0507 19:33:59.872320       1 server_linux.go:165] "Using iptables Proxier"
	I0507 19:55:40.490242    5068 command_runner.go:130] ! I0507 19:33:59.878610       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0507 19:55:40.490351    5068 command_runner.go:130] ! I0507 19:33:59.879634       1 server.go:872] "Version info" version="v1.30.0"
	I0507 19:55:40.490351    5068 command_runner.go:130] ! I0507 19:33:59.879774       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:40.490351    5068 command_runner.go:130] ! I0507 19:33:59.883100       1 config.go:192] "Starting service config controller"
	I0507 19:55:40.490351    5068 command_runner.go:130] ! I0507 19:33:59.884238       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0507 19:55:40.490351    5068 command_runner.go:130] ! I0507 19:33:59.884310       1 config.go:101] "Starting endpoint slice config controller"
	I0507 19:55:40.490453    5068 command_runner.go:130] ! I0507 19:33:59.884544       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0507 19:55:40.490453    5068 command_runner.go:130] ! I0507 19:33:59.886801       1 config.go:319] "Starting node config controller"
	I0507 19:55:40.490453    5068 command_runner.go:130] ! I0507 19:33:59.888528       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0507 19:55:40.490453    5068 command_runner.go:130] ! I0507 19:33:59.985346       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0507 19:55:40.490536    5068 command_runner.go:130] ! I0507 19:33:59.985458       1 shared_informer.go:320] Caches are synced for service config
	I0507 19:55:40.490536    5068 command_runner.go:130] ! I0507 19:33:59.988897       1 shared_informer.go:320] Caches are synced for node config
	I0507 19:55:40.493314    5068 logs.go:123] Gathering logs for kube-controller-manager [922d1e2b8745] ...
	I0507 19:55:40.493403    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 922d1e2b8745"
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:31.703073       1 serving.go:380] Generated self-signed cert in-memory
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:32.356571       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:32.356606       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:32.361009       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:32.362062       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:32.362316       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:32.362806       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.660463       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.661512       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.672846       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.673901       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.674100       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.677134       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.677224       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.677646       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.687463       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.690256       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.690418       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.693293       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.693482       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.693648       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.705135       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.705560       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.705715       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0507 19:55:40.522475    5068 command_runner.go:130] ! I0507 19:54:35.707645       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0507 19:55:40.523159    5068 command_runner.go:130] ! I0507 19:54:35.714544       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0507 19:55:40.523159    5068 command_runner.go:130] ! I0507 19:54:35.714950       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0507 19:55:40.523159    5068 command_runner.go:130] ! I0507 19:54:35.714979       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0507 19:55:40.523159    5068 command_runner.go:130] ! I0507 19:54:35.718207       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0507 19:55:40.523159    5068 command_runner.go:130] ! I0507 19:54:35.718555       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0507 19:55:40.523159    5068 command_runner.go:130] ! I0507 19:54:35.719592       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0507 19:55:40.523159    5068 command_runner.go:130] ! I0507 19:54:35.721267       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0507 19:55:40.523159    5068 command_runner.go:130] ! I0507 19:54:35.722621       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0507 19:55:40.523159    5068 command_runner.go:130] ! I0507 19:54:35.722870       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0507 19:55:40.523159    5068 command_runner.go:130] ! I0507 19:54:35.725345       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0507 19:55:40.523414    5068 command_runner.go:130] ! I0507 19:54:35.725516       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0507 19:55:40.523414    5068 command_runner.go:130] ! I0507 19:54:35.727155       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0507 19:55:40.523414    5068 command_runner.go:130] ! I0507 19:54:35.732889       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0507 19:55:40.523414    5068 command_runner.go:130] ! I0507 19:54:35.733036       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0507 19:55:40.523517    5068 command_runner.go:130] ! I0507 19:54:35.733340       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0507 19:55:40.523517    5068 command_runner.go:130] ! I0507 19:54:35.733465       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0507 19:55:40.523517    5068 command_runner.go:130] ! I0507 19:54:35.734424       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0507 19:55:40.523517    5068 command_runner.go:130] ! I0507 19:54:35.739429       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0507 19:55:40.523517    5068 command_runner.go:130] ! I0507 19:54:35.740234       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0507 19:55:40.523640    5068 command_runner.go:130] ! I0507 19:54:35.740690       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0507 19:55:40.523640    5068 command_runner.go:130] ! I0507 19:54:35.740915       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0507 19:55:40.523640    5068 command_runner.go:130] ! E0507 19:54:35.758883       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0507 19:55:40.523640    5068 command_runner.go:130] ! I0507 19:54:35.759554       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0507 19:55:40.523745    5068 command_runner.go:130] ! I0507 19:54:35.764996       1 shared_informer.go:320] Caches are synced for tokens
	I0507 19:55:40.523745    5068 command_runner.go:130] ! I0507 19:54:35.770304       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0507 19:55:40.523745    5068 command_runner.go:130] ! I0507 19:54:35.770613       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0507 19:55:40.523745    5068 command_runner.go:130] ! I0507 19:54:35.771644       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0507 19:55:40.523745    5068 command_runner.go:130] ! I0507 19:54:35.773532       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0507 19:55:40.523853    5068 command_runner.go:130] ! I0507 19:54:35.773999       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0507 19:55:40.523853    5068 command_runner.go:130] ! I0507 19:54:35.776366       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0507 19:55:40.523853    5068 command_runner.go:130] ! I0507 19:54:35.776291       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0507 19:55:40.523853    5068 command_runner.go:130] ! I0507 19:54:35.777049       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0507 19:55:40.523967    5068 command_runner.go:130] ! I0507 19:54:35.778718       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0507 19:55:40.523967    5068 command_runner.go:130] ! I0507 19:54:35.782053       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0507 19:55:40.523967    5068 command_runner.go:130] ! I0507 19:54:35.782295       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0507 19:55:40.523967    5068 command_runner.go:130] ! I0507 19:54:35.783178       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0507 19:55:40.523967    5068 command_runner.go:130] ! I0507 19:54:35.783590       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0507 19:55:40.524074    5068 command_runner.go:130] ! I0507 19:54:35.785509       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0507 19:55:40.524074    5068 command_runner.go:130] ! I0507 19:54:35.785650       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0507 19:55:40.524074    5068 command_runner.go:130] ! I0507 19:54:35.785771       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:40.524172    5068 command_runner.go:130] ! I0507 19:54:35.786304       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0507 19:55:40.524172    5068 command_runner.go:130] ! I0507 19:54:35.786711       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0507 19:55:40.524172    5068 command_runner.go:130] ! I0507 19:54:35.788143       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:40.524172    5068 command_runner.go:130] ! I0507 19:54:35.788161       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0507 19:55:40.524278    5068 command_runner.go:130] ! I0507 19:54:35.788891       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0507 19:55:40.524278    5068 command_runner.go:130] ! I0507 19:54:35.788187       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:40.524278    5068 command_runner.go:130] ! I0507 19:54:35.788425       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0507 19:55:40.524278    5068 command_runner.go:130] ! I0507 19:54:35.789279       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0507 19:55:40.524278    5068 command_runner.go:130] ! I0507 19:54:35.788437       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:40.524278    5068 command_runner.go:130] ! I0507 19:54:35.788403       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0507 19:55:40.524816    5068 command_runner.go:130] ! E0507 19:54:35.794689       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0507 19:55:40.524816    5068 command_runner.go:130] ! I0507 19:54:35.794706       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0507 19:55:40.524816    5068 command_runner.go:130] ! I0507 19:54:35.797181       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0507 19:55:40.524927    5068 command_runner.go:130] ! I0507 19:54:35.797390       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0507 19:55:40.524927    5068 command_runner.go:130] ! I0507 19:54:35.797366       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0507 19:55:40.524994    5068 command_runner.go:130] ! I0507 19:54:35.798435       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0507 19:55:40.524994    5068 command_runner.go:130] ! I0507 19:54:35.799150       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0507 19:55:40.525059    5068 command_runner.go:130] ! I0507 19:54:35.799419       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0507 19:55:40.525059    5068 command_runner.go:130] ! I0507 19:54:35.800319       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0507 19:55:40.525059    5068 command_runner.go:130] ! I0507 19:54:35.800396       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0507 19:55:40.525059    5068 command_runner.go:130] ! I0507 19:54:35.801149       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0507 19:55:40.525159    5068 command_runner.go:130] ! I0507 19:54:35.801340       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0507 19:55:40.525159    5068 command_runner.go:130] ! I0507 19:54:35.805459       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0507 19:55:40.525159    5068 command_runner.go:130] ! I0507 19:54:35.806312       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0507 19:55:40.525159    5068 command_runner.go:130] ! I0507 19:54:35.806898       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0507 19:55:40.525159    5068 command_runner.go:130] ! I0507 19:54:35.806915       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0507 19:55:40.525278    5068 command_runner.go:130] ! I0507 19:54:35.820458       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0507 19:55:40.525278    5068 command_runner.go:130] ! I0507 19:54:35.823993       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0507 19:55:40.525278    5068 command_runner.go:130] ! I0507 19:54:35.824174       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0507 19:55:40.525278    5068 command_runner.go:130] ! I0507 19:54:45.843537       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0507 19:55:40.525278    5068 command_runner.go:130] ! I0507 19:54:45.845601       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0507 19:55:40.525388    5068 command_runner.go:130] ! I0507 19:54:45.845839       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0507 19:55:40.525388    5068 command_runner.go:130] ! I0507 19:54:45.846020       1 shared_informer.go:313] Waiting for caches to sync for node
	I0507 19:55:40.525388    5068 command_runner.go:130] ! I0507 19:54:45.856361       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0507 19:55:40.525388    5068 command_runner.go:130] ! I0507 19:54:45.856445       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0507 19:55:40.525388    5068 command_runner.go:130] ! I0507 19:54:45.856582       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0507 19:55:40.525388    5068 command_runner.go:130] ! I0507 19:54:45.860605       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0507 19:55:40.525518    5068 command_runner.go:130] ! I0507 19:54:45.861230       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0507 19:55:40.525518    5068 command_runner.go:130] ! I0507 19:54:45.861688       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0507 19:55:40.525518    5068 command_runner.go:130] ! I0507 19:54:45.882679       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0507 19:55:40.525518    5068 command_runner.go:130] ! I0507 19:54:45.882882       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0507 19:55:40.525642    5068 command_runner.go:130] ! I0507 19:54:45.883004       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0507 19:55:40.525642    5068 command_runner.go:130] ! I0507 19:54:45.883100       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0507 19:55:40.525642    5068 command_runner.go:130] ! I0507 19:54:45.883309       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0507 19:55:40.525642    5068 command_runner.go:130] ! I0507 19:54:45.883768       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0507 19:55:40.525642    5068 command_runner.go:130] ! I0507 19:54:45.884103       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0507 19:55:40.525642    5068 command_runner.go:130] ! I0507 19:54:45.884144       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0507 19:55:40.525780    5068 command_runner.go:130] ! I0507 19:54:45.884169       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0507 19:55:40.525780    5068 command_runner.go:130] ! I0507 19:54:45.884544       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0507 19:55:40.525780    5068 command_runner.go:130] ! I0507 19:54:45.884707       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0507 19:55:40.525780    5068 command_runner.go:130] ! I0507 19:54:45.884806       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0507 19:55:40.525923    5068 command_runner.go:130] ! I0507 19:54:45.884934       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0507 19:55:40.525923    5068 command_runner.go:130] ! I0507 19:54:45.884999       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0507 19:55:40.525923    5068 command_runner.go:130] ! I0507 19:54:45.885027       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0507 19:55:40.525923    5068 command_runner.go:130] ! I0507 19:54:45.885214       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0507 19:55:40.526124    5068 command_runner.go:130] ! I0507 19:54:45.885361       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0507 19:55:40.526223    5068 command_runner.go:130] ! I0507 19:54:45.885395       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0507 19:55:40.526257    5068 command_runner.go:130] ! I0507 19:54:45.885452       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0507 19:55:40.526294    5068 command_runner.go:130] ! I0507 19:54:45.885513       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0507 19:55:40.526375    5068 command_runner.go:130] ! I0507 19:54:45.885658       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0507 19:55:40.526375    5068 command_runner.go:130] ! I0507 19:54:45.885798       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0507 19:55:40.526450    5068 command_runner.go:130] ! I0507 19:54:45.885854       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0507 19:55:40.526488    5068 command_runner.go:130] ! I0507 19:54:45.885875       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0507 19:55:40.526488    5068 command_runner.go:130] ! I0507 19:54:45.888915       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0507 19:55:40.526488    5068 command_runner.go:130] ! I0507 19:54:45.890326       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0507 19:55:40.526488    5068 command_runner.go:130] ! I0507 19:54:45.890549       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0507 19:55:40.526564    5068 command_runner.go:130] ! I0507 19:54:45.892442       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0507 19:55:40.526564    5068 command_runner.go:130] ! I0507 19:54:45.892857       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0507 19:55:40.526564    5068 command_runner.go:130] ! I0507 19:54:45.892697       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0507 19:55:40.526646    5068 command_runner.go:130] ! I0507 19:54:45.895556       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0507 19:55:40.526646    5068 command_runner.go:130] ! I0507 19:54:45.896185       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0507 19:55:40.526646    5068 command_runner.go:130] ! I0507 19:54:45.896210       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0507 19:55:40.526646    5068 command_runner.go:130] ! I0507 19:54:45.898050       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0507 19:55:40.526730    5068 command_runner.go:130] ! I0507 19:54:45.898440       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0507 19:55:40.526730    5068 command_runner.go:130] ! I0507 19:54:45.898466       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0507 19:55:40.526730    5068 command_runner.go:130] ! I0507 19:54:45.901016       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0507 19:55:40.526818    5068 command_runner.go:130] ! I0507 19:54:45.901365       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0507 19:55:40.526818    5068 command_runner.go:130] ! I0507 19:54:45.901496       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0507 19:55:40.526818    5068 command_runner.go:130] ! I0507 19:54:45.904035       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0507 19:55:40.526818    5068 command_runner.go:130] ! I0507 19:54:45.906504       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0507 19:55:40.526818    5068 command_runner.go:130] ! I0507 19:54:45.906590       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0507 19:55:40.526923    5068 command_runner.go:130] ! I0507 19:54:45.936436       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0507 19:55:40.526923    5068 command_runner.go:130] ! I0507 19:54:45.936514       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0507 19:55:40.526923    5068 command_runner.go:130] ! I0507 19:54:45.936644       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0507 19:55:40.526923    5068 command_runner.go:130] ! I0507 19:54:45.950622       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0507 19:55:40.527022    5068 command_runner.go:130] ! I0507 19:54:45.950687       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0507 19:55:40.527022    5068 command_runner.go:130] ! I0507 19:54:45.952156       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0507 19:55:40.527022    5068 command_runner.go:130] ! I0507 19:54:45.960379       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0507 19:55:40.527022    5068 command_runner.go:130] ! I0507 19:54:45.960563       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0507 19:55:40.527123    5068 command_runner.go:130] ! I0507 19:54:45.960800       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0507 19:55:40.527123    5068 command_runner.go:130] ! I0507 19:54:45.960885       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0507 19:55:40.527123    5068 command_runner.go:130] ! I0507 19:54:45.960448       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0507 19:55:40.527123    5068 command_runner.go:130] ! I0507 19:54:45.960996       1 shared_informer.go:313] Waiting for caches to sync for job
	I0507 19:55:40.527224    5068 command_runner.go:130] ! I0507 19:54:45.964056       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0507 19:55:40.527224    5068 command_runner.go:130] ! I0507 19:54:45.964077       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0507 19:55:40.527224    5068 command_runner.go:130] ! I0507 19:54:45.964454       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0507 19:55:40.527224    5068 command_runner.go:130] ! I0507 19:54:45.967293       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0507 19:55:40.527301    5068 command_runner.go:130] ! I0507 19:54:45.967699       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:45.967884       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:45.969920       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:45.969950       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:45.979639       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:45.993084       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:45.993911       1 shared_informer.go:320] Caches are synced for service account
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.001799       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.002705       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.006101       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.008805       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.014352       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.021643       1 shared_informer.go:320] Caches are synced for crt configmap
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.023805       1 shared_informer.go:320] Caches are synced for stateful set
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.027827       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.052799       1 shared_informer.go:320] Caches are synced for namespace
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.056820       1 shared_informer.go:320] Caches are synced for PV protection
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.062319       1 shared_informer.go:320] Caches are synced for job
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.062392       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.065647       1 shared_informer.go:320] Caches are synced for ephemeral
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.068108       1 shared_informer.go:320] Caches are synced for endpoint
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.072892       1 shared_informer.go:320] Caches are synced for expand
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.075814       1 shared_informer.go:320] Caches are synced for cronjob
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.077269       1 shared_informer.go:320] Caches are synced for PVC protection
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.085427       1 shared_informer.go:320] Caches are synced for disruption
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.086039       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.089158       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.089172       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.089394       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0507 19:55:40.527362    5068 command_runner.go:130] ! I0507 19:54:46.091216       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0507 19:55:40.527890    5068 command_runner.go:130] ! I0507 19:54:46.107002       1 shared_informer.go:320] Caches are synced for deployment
	I0507 19:55:40.527890    5068 command_runner.go:130] ! I0507 19:54:46.116997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.691909ms"
	I0507 19:55:40.527890    5068 command_runner.go:130] ! I0507 19:54:46.118004       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.006µs"
	I0507 19:55:40.527977    5068 command_runner.go:130] ! I0507 19:54:46.123476       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.139964ms"
	I0507 19:55:40.527977    5068 command_runner.go:130] ! I0507 19:54:46.124362       1 shared_informer.go:320] Caches are synced for HPA
	I0507 19:55:40.527977    5068 command_runner.go:130] ! I0507 19:54:46.124468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="121.91µs"
	I0507 19:55:40.527977    5068 command_runner.go:130] ! I0507 19:54:46.181088       1 shared_informer.go:320] Caches are synced for resource quota
	I0507 19:55:40.527977    5068 command_runner.go:130] ! I0507 19:54:46.189327       1 shared_informer.go:320] Caches are synced for resource quota
	I0507 19:55:40.528054    5068 command_runner.go:130] ! I0507 19:54:46.228301       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:40.528054    5068 command_runner.go:130] ! I0507 19:54:46.229031       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:40.528054    5068 command_runner.go:130] ! I0507 19:54:46.229515       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:40.528131    5068 command_runner.go:130] ! I0507 19:54:46.229843       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000\" does not exist"
	I0507 19:55:40.528131    5068 command_runner.go:130] ! I0507 19:54:46.229885       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m02\" does not exist"
	I0507 19:55:40.528205    5068 command_runner.go:130] ! I0507 19:54:46.229901       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m03\" does not exist"
	I0507 19:55:40.528205    5068 command_runner.go:130] ! I0507 19:54:46.234886       1 shared_informer.go:320] Caches are synced for taint
	I0507 19:55:40.528205    5068 command_runner.go:130] ! I0507 19:54:46.235155       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0507 19:55:40.528205    5068 command_runner.go:130] ! I0507 19:54:46.237527       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0507 19:55:40.528279    5068 command_runner.go:130] ! I0507 19:54:46.249515       1 shared_informer.go:320] Caches are synced for node
	I0507 19:55:40.528279    5068 command_runner.go:130] ! I0507 19:54:46.249660       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0507 19:55:40.528279    5068 command_runner.go:130] ! I0507 19:54:46.249700       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0507 19:55:40.528279    5068 command_runner.go:130] ! I0507 19:54:46.249711       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0507 19:55:40.528279    5068 command_runner.go:130] ! I0507 19:54:46.249718       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0507 19:55:40.528352    5068 command_runner.go:130] ! I0507 19:54:46.261687       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000"
	I0507 19:55:40.528352    5068 command_runner.go:130] ! I0507 19:54:46.261718       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000-m02"
	I0507 19:55:40.528352    5068 command_runner.go:130] ! I0507 19:54:46.261950       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000-m03"
	I0507 19:55:40.528426    5068 command_runner.go:130] ! I0507 19:54:46.263203       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0507 19:55:40.528426    5068 command_runner.go:130] ! I0507 19:54:46.282864       1 shared_informer.go:320] Caches are synced for GC
	I0507 19:55:40.528426    5068 command_runner.go:130] ! I0507 19:54:46.282948       1 shared_informer.go:320] Caches are synced for TTL
	I0507 19:55:40.528426    5068 command_runner.go:130] ! I0507 19:54:46.291375       1 shared_informer.go:320] Caches are synced for attach detach
	I0507 19:55:40.528426    5068 command_runner.go:130] ! I0507 19:54:46.296389       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0507 19:55:40.528499    5068 command_runner.go:130] ! I0507 19:54:46.299531       1 shared_informer.go:320] Caches are synced for persistent volume
	I0507 19:55:40.528499    5068 command_runner.go:130] ! I0507 19:54:46.301547       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0507 19:55:40.528499    5068 command_runner.go:130] ! I0507 19:54:46.315610       1 shared_informer.go:320] Caches are synced for daemon sets
	I0507 19:55:40.528499    5068 command_runner.go:130] ! I0507 19:54:46.707389       1 shared_informer.go:320] Caches are synced for garbage collector
	I0507 19:55:40.528499    5068 command_runner.go:130] ! I0507 19:54:46.707484       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0507 19:55:40.528577    5068 command_runner.go:130] ! I0507 19:54:46.714879       1 shared_informer.go:320] Caches are synced for garbage collector
	I0507 19:55:40.528577    5068 command_runner.go:130] ! I0507 19:55:09.379932       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:40.528577    5068 command_runner.go:130] ! I0507 19:55:26.356626       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.170086ms"
	I0507 19:55:40.528577    5068 command_runner.go:130] ! I0507 19:55:26.358052       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.002µs"
	I0507 19:55:40.528650    5068 command_runner.go:130] ! I0507 19:55:38.936045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.905µs"
	I0507 19:55:40.528650    5068 command_runner.go:130] ! I0507 19:55:38.982779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.443975ms"
	I0507 19:55:40.528650    5068 command_runner.go:130] ! I0507 19:55:38.983177       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.503µs"
	I0507 19:55:40.528723    5068 command_runner.go:130] ! I0507 19:55:39.007447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.25642ms"
	I0507 19:55:40.528723    5068 command_runner.go:130] ! I0507 19:55:39.007824       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="337.32µs"
	I0507 19:55:40.542868    5068 logs.go:123] Gathering logs for kube-controller-manager [3067f16e2e38] ...
	I0507 19:55:40.542868    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3067f16e2e38"
	I0507 19:55:40.576510    5068 command_runner.go:130] ! I0507 19:33:39.646652       1 serving.go:380] Generated self-signed cert in-memory
	I0507 19:55:40.577192    5068 command_runner.go:130] ! I0507 19:33:40.017908       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0507 19:55:40.577192    5068 command_runner.go:130] ! I0507 19:33:40.018051       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:40.577354    5068 command_runner.go:130] ! I0507 19:33:40.019973       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0507 19:55:40.577354    5068 command_runner.go:130] ! I0507 19:33:40.020228       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0507 19:55:40.577354    5068 command_runner.go:130] ! I0507 19:33:40.023071       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0507 19:55:40.577354    5068 command_runner.go:130] ! I0507 19:33:40.024192       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 19:55:40.577354    5068 command_runner.go:130] ! I0507 19:33:44.035484       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0507 19:55:40.577354    5068 command_runner.go:130] ! I0507 19:33:44.035669       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0507 19:55:40.577354    5068 command_runner.go:130] ! I0507 19:33:44.062270       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0507 19:55:40.577354    5068 command_runner.go:130] ! I0507 19:33:44.062488       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0507 19:55:40.577354    5068 command_runner.go:130] ! I0507 19:33:44.062501       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0507 19:55:40.577354    5068 command_runner.go:130] ! I0507 19:33:44.082052       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0507 19:55:40.577354    5068 command_runner.go:130] ! I0507 19:33:44.082328       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0507 19:55:40.577640    5068 command_runner.go:130] ! I0507 19:33:44.082342       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0507 19:55:40.577640    5068 command_runner.go:130] ! I0507 19:33:44.097853       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0507 19:55:40.577694    5068 command_runner.go:130] ! I0507 19:33:44.100760       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0507 19:55:40.577694    5068 command_runner.go:130] ! I0507 19:33:44.101645       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0507 19:55:40.577694    5068 command_runner.go:130] ! I0507 19:33:44.135768       1 shared_informer.go:320] Caches are synced for tokens
	I0507 19:55:40.577694    5068 command_runner.go:130] ! I0507 19:33:44.143316       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0507 19:55:40.577694    5068 command_runner.go:130] ! I0507 19:33:44.143654       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0507 19:55:40.577694    5068 command_runner.go:130] ! I0507 19:33:44.143854       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0507 19:55:40.577694    5068 command_runner.go:130] ! I0507 19:33:44.156569       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0507 19:55:40.577694    5068 command_runner.go:130] ! I0507 19:33:44.156806       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0507 19:55:40.577694    5068 command_runner.go:130] ! I0507 19:33:44.156821       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0507 19:55:40.577694    5068 command_runner.go:130] ! I0507 19:33:44.193774       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0507 19:55:40.577694    5068 command_runner.go:130] ! I0507 19:33:44.194041       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0507 19:55:40.577694    5068 command_runner.go:130] ! I0507 19:33:44.224957       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0507 19:55:40.577694    5068 command_runner.go:130] ! I0507 19:33:44.225326       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0507 19:55:40.577694    5068 command_runner.go:130] ! I0507 19:33:44.225340       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0507 19:55:40.577694    5068 command_runner.go:130] ! I0507 19:33:44.264579       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0507 19:55:40.578095    5068 command_runner.go:130] ! I0507 19:33:44.265097       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0507 19:55:40.578163    5068 command_runner.go:130] ! I0507 19:33:44.265116       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0507 19:55:40.578287    5068 command_runner.go:130] ! I0507 19:33:44.287038       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0507 19:55:40.578323    5068 command_runner.go:130] ! I0507 19:33:44.287393       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0507 19:55:40.578433    5068 command_runner.go:130] ! I0507 19:33:44.287436       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0507 19:55:40.578488    5068 command_runner.go:130] ! I0507 19:33:44.356902       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0507 19:55:40.578525    5068 command_runner.go:130] ! I0507 19:33:44.357443       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0507 19:55:40.578567    5068 command_runner.go:130] ! I0507 19:33:44.357459       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0507 19:55:40.578567    5068 command_runner.go:130] ! E0507 19:33:44.380020       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0507 19:55:40.578651    5068 command_runner.go:130] ! I0507 19:33:44.380113       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0507 19:55:40.578651    5068 command_runner.go:130] ! I0507 19:33:44.504313       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0507 19:55:40.578717    5068 command_runner.go:130] ! I0507 19:33:44.504889       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0507 19:55:40.578717    5068 command_runner.go:130] ! I0507 19:33:44.504939       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0507 19:55:40.578779    5068 command_runner.go:130] ! I0507 19:33:44.642194       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0507 19:55:40.578779    5068 command_runner.go:130] ! I0507 19:33:44.642248       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0507 19:55:40.578779    5068 command_runner.go:130] ! I0507 19:33:44.642259       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0507 19:55:40.578779    5068 command_runner.go:130] ! I0507 19:33:44.952758       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0507 19:55:40.578779    5068 command_runner.go:130] ! I0507 19:33:44.952894       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0507 19:55:40.578779    5068 command_runner.go:130] ! I0507 19:33:44.952916       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0507 19:55:40.578779    5068 command_runner.go:130] ! I0507 19:33:44.952951       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0507 19:55:40.578968    5068 command_runner.go:130] ! I0507 19:33:44.952971       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0507 19:55:40.579029    5068 command_runner.go:130] ! I0507 19:33:44.953093       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0507 19:55:40.579029    5068 command_runner.go:130] ! I0507 19:33:44.953113       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0507 19:55:40.579107    5068 command_runner.go:130] ! I0507 19:33:44.953131       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0507 19:55:40.579140    5068 command_runner.go:130] ! I0507 19:33:44.953150       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0507 19:55:40.579140    5068 command_runner.go:130] ! I0507 19:33:44.953173       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0507 19:55:40.579140    5068 command_runner.go:130] ! I0507 19:33:44.953207       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0507 19:55:40.579208    5068 command_runner.go:130] ! I0507 19:33:44.953385       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0507 19:55:40.579232    5068 command_runner.go:130] ! I0507 19:33:44.953527       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0507 19:55:40.579232    5068 command_runner.go:130] ! I0507 19:33:44.953695       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0507 19:55:40.579232    5068 command_runner.go:130] ! I0507 19:33:44.953874       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0507 19:55:40.579303    5068 command_runner.go:130] ! I0507 19:33:44.954040       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0507 19:55:40.579335    5068 command_runner.go:130] ! I0507 19:33:44.954064       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0507 19:55:40.579335    5068 command_runner.go:130] ! I0507 19:33:44.954206       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0507 19:55:40.579383    5068 command_runner.go:130] ! I0507 19:33:44.954278       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0507 19:55:40.579383    5068 command_runner.go:130] ! I0507 19:33:44.954308       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0507 19:55:40.579421    5068 command_runner.go:130] ! I0507 19:33:44.954374       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0507 19:55:40.579460    5068 command_runner.go:130] ! I0507 19:33:44.954592       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0507 19:55:40.579460    5068 command_runner.go:130] ! I0507 19:33:44.954813       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0507 19:55:40.579460    5068 command_runner.go:130] ! I0507 19:33:44.954968       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0507 19:55:40.579460    5068 command_runner.go:130] ! I0507 19:33:44.959507       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0507 19:55:40.579534    5068 command_runner.go:130] ! I0507 19:33:45.092915       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0507 19:55:40.579534    5068 command_runner.go:130] ! I0507 19:33:45.092938       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0507 19:55:40.579565    5068 command_runner.go:130] ! I0507 19:33:45.092974       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0507 19:55:40.579614    5068 command_runner.go:130] ! I0507 19:33:45.093078       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0507 19:55:40.579614    5068 command_runner.go:130] ! I0507 19:33:45.093089       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0507 19:55:40.579614    5068 command_runner.go:130] ! I0507 19:33:45.248481       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0507 19:55:40.579654    5068 command_runner.go:130] ! I0507 19:33:45.248590       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0507 19:55:40.579654    5068 command_runner.go:130] ! I0507 19:33:45.248600       1 shared_informer.go:313] Waiting for caches to sync for job
	I0507 19:55:40.579694    5068 command_runner.go:130] ! I0507 19:33:45.403516       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0507 19:55:40.579694    5068 command_runner.go:130] ! I0507 19:33:45.403864       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0507 19:55:40.579694    5068 command_runner.go:130] ! I0507 19:33:45.404124       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0507 19:55:40.579694    5068 command_runner.go:130] ! I0507 19:33:45.547079       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0507 19:55:40.579766    5068 command_runner.go:130] ! I0507 19:33:45.547101       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0507 19:55:40.579798    5068 command_runner.go:130] ! I0507 19:33:45.547218       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0507 19:55:40.579798    5068 command_runner.go:130] ! I0507 19:33:45.547228       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0507 19:55:40.579798    5068 command_runner.go:130] ! I0507 19:33:45.695293       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0507 19:55:40.579798    5068 command_runner.go:130] ! I0507 19:33:45.695376       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0507 19:55:40.579848    5068 command_runner.go:130] ! I0507 19:33:45.695385       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0507 19:55:40.579848    5068 command_runner.go:130] ! I0507 19:33:45.842519       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0507 19:55:40.579848    5068 command_runner.go:130] ! I0507 19:33:45.843201       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0507 19:55:40.579909    5068 command_runner.go:130] ! I0507 19:33:45.843464       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0507 19:55:40.579909    5068 command_runner.go:130] ! I0507 19:33:45.843612       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0507 19:55:40.579909    5068 command_runner.go:130] ! I0507 19:33:45.843670       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0507 19:55:40.579957    5068 command_runner.go:130] ! I0507 19:33:45.994121       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0507 19:55:40.579957    5068 command_runner.go:130] ! I0507 19:33:45.994195       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0507 19:55:40.579957    5068 command_runner.go:130] ! I0507 19:33:45.994559       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0507 19:55:40.579957    5068 command_runner.go:130] ! I0507 19:33:46.142670       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0507 19:55:40.580039    5068 command_runner.go:130] ! I0507 19:33:46.142767       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0507 19:55:40.580039    5068 command_runner.go:130] ! I0507 19:33:46.142777       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0507 19:55:40.580069    5068 command_runner.go:130] ! I0507 19:33:46.292842       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0507 19:55:40.580069    5068 command_runner.go:130] ! I0507 19:33:46.292937       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0507 19:55:40.580069    5068 command_runner.go:130] ! I0507 19:33:46.292979       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0507 19:55:40.580129    5068 command_runner.go:130] ! I0507 19:33:46.293532       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0507 19:55:40.580129    5068 command_runner.go:130] ! I0507 19:33:46.443522       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0507 19:55:40.580178    5068 command_runner.go:130] ! I0507 19:33:46.443783       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0507 19:55:40.580178    5068 command_runner.go:130] ! I0507 19:33:46.443796       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0507 19:55:40.580178    5068 command_runner.go:130] ! I0507 19:33:46.639478       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0507 19:55:40.580178    5068 command_runner.go:130] ! I0507 19:33:46.639695       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0507 19:55:40.580178    5068 command_runner.go:130] ! I0507 19:33:46.640237       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0507 19:55:40.580241    5068 command_runner.go:130] ! I0507 19:33:46.640384       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0507 19:55:40.580271    5068 command_runner.go:130] ! I0507 19:33:46.802195       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0507 19:55:40.580271    5068 command_runner.go:130] ! I0507 19:33:46.802321       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0507 19:55:40.580271    5068 command_runner.go:130] ! I0507 19:33:46.802333       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0507 19:55:40.580271    5068 command_runner.go:130] ! I0507 19:33:46.839302       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0507 19:55:40.580271    5068 command_runner.go:130] ! I0507 19:33:46.839419       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0507 19:55:40.580335    5068 command_runner.go:130] ! I0507 19:33:46.839439       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0507 19:55:40.580361    5068 command_runner.go:130] ! I0507 19:33:46.839547       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0507 19:55:40.580382    5068 command_runner.go:130] ! I0507 19:33:46.995880       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0507 19:55:40.580382    5068 command_runner.go:130] ! I0507 19:33:46.996105       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0507 19:55:40.580421    5068 command_runner.go:130] ! I0507 19:33:46.996124       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0507 19:55:40.580421    5068 command_runner.go:130] ! I0507 19:33:46.996192       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0507 19:55:40.580458    5068 command_runner.go:130] ! I0507 19:33:46.996213       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0507 19:55:40.580458    5068 command_runner.go:130] ! I0507 19:33:46.996264       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:40.580497    5068 command_runner.go:130] ! I0507 19:33:46.996515       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:40.580534    5068 command_runner.go:130] ! I0507 19:33:46.997757       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0507 19:55:40.580572    5068 command_runner.go:130] ! I0507 19:33:46.997789       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0507 19:55:40.580572    5068 command_runner.go:130] ! I0507 19:33:46.998232       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0507 19:55:40.580605    5068 command_runner.go:130] ! I0507 19:33:46.998256       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0507 19:55:40.580642    5068 command_runner.go:130] ! I0507 19:33:46.998461       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:40.580642    5068 command_runner.go:130] ! I0507 19:33:46.998581       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:40.580681    5068 command_runner.go:130] ! I0507 19:33:47.144659       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0507 19:55:40.580681    5068 command_runner.go:130] ! I0507 19:33:47.144787       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0507 19:55:40.580718    5068 command_runner.go:130] ! I0507 19:33:47.144840       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0507 19:55:40.580718    5068 command_runner.go:130] ! I0507 19:33:47.188132       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0507 19:55:40.580751    5068 command_runner.go:130] ! I0507 19:33:47.188178       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0507 19:55:40.580751    5068 command_runner.go:130] ! I0507 19:33:47.188191       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0507 19:55:40.580788    5068 command_runner.go:130] ! I0507 19:33:47.238083       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0507 19:55:40.580788    5068 command_runner.go:130] ! I0507 19:33:47.238123       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0507 19:55:40.580825    5068 command_runner.go:130] ! I0507 19:33:47.394585       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0507 19:55:40.580825    5068 command_runner.go:130] ! I0507 19:33:47.394777       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0507 19:55:40.580825    5068 command_runner.go:130] ! I0507 19:33:47.394803       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0507 19:55:40.580825    5068 command_runner.go:130] ! I0507 19:33:47.394838       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0507 19:55:40.580825    5068 command_runner.go:130] ! I0507 19:33:57.452785       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0507 19:55:40.580897    5068 command_runner.go:130] ! I0507 19:33:57.452897       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0507 19:55:40.580924    5068 command_runner.go:130] ! I0507 19:33:57.453626       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0507 19:55:40.580924    5068 command_runner.go:130] ! I0507 19:33:57.453826       1 shared_informer.go:313] Waiting for caches to sync for node
	I0507 19:55:40.580924    5068 command_runner.go:130] ! I0507 19:33:57.483145       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0507 19:55:40.580957    5068 command_runner.go:130] ! I0507 19:33:57.483422       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0507 19:55:40.580957    5068 command_runner.go:130] ! I0507 19:33:57.493863       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0507 19:55:40.580995    5068 command_runner.go:130] ! I0507 19:33:57.494296       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0507 19:55:40.580995    5068 command_runner.go:130] ! I0507 19:33:57.494585       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0507 19:55:40.581026    5068 command_runner.go:130] ! I0507 19:33:57.506181       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0507 19:55:40.581026    5068 command_runner.go:130] ! I0507 19:33:57.506211       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0507 19:55:40.581026    5068 command_runner.go:130] ! I0507 19:33:57.506219       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0507 19:55:40.581083    5068 command_runner.go:130] ! I0507 19:33:57.506448       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0507 19:55:40.581083    5068 command_runner.go:130] ! I0507 19:33:57.506471       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0507 19:55:40.581119    5068 command_runner.go:130] ! E0507 19:33:57.508667       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0507 19:55:40.581119    5068 command_runner.go:130] ! I0507 19:33:57.508863       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.536071       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.536238       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.536958       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.552316       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.552368       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.552583       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.552830       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.602799       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.604255       1 shared_informer.go:320] Caches are synced for expand
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.604567       1 shared_informer.go:320] Caches are synced for cronjob
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.604710       1 shared_informer.go:320] Caches are synced for PV protection
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.616713       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000\" does not exist"
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.620217       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.625534       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.637418       1 shared_informer.go:320] Caches are synced for namespace
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.640979       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.643690       1 shared_informer.go:320] Caches are synced for ephemeral
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.643962       1 shared_informer.go:320] Caches are synced for crt configmap
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.643944       1 shared_informer.go:320] Caches are synced for endpoint
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.645645       1 shared_informer.go:320] Caches are synced for PVC protection
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.650051       1 shared_informer.go:320] Caches are synced for job
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.654615       1 shared_informer.go:320] Caches are synced for node
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.654828       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.654976       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.658548       1 shared_informer.go:320] Caches are synced for stateful set
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.658557       1 shared_informer.go:320] Caches are synced for TTL
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.658578       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.660814       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.662570       1 shared_informer.go:320] Caches are synced for GC
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.666627       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.682592       1 shared_informer.go:320] Caches are synced for service account
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.683797       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.686866       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000" podCIDRs=["10.244.0.0/24"]
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.688271       1 shared_informer.go:320] Caches are synced for persistent volume
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.688450       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.693833       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.695065       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.696405       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.696588       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.699644       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.700059       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0507 19:55:40.581147    5068 command_runner.go:130] ! I0507 19:33:57.700324       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0507 19:55:40.581672    5068 command_runner.go:130] ! I0507 19:33:57.703629       1 shared_informer.go:320] Caches are synced for daemon sets
	I0507 19:55:40.581672    5068 command_runner.go:130] ! I0507 19:33:57.710906       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0507 19:55:40.581711    5068 command_runner.go:130] ! I0507 19:33:57.744541       1 shared_informer.go:320] Caches are synced for HPA
	I0507 19:55:40.581711    5068 command_runner.go:130] ! I0507 19:33:57.744580       1 shared_informer.go:320] Caches are synced for taint
	I0507 19:55:40.581757    5068 command_runner.go:130] ! I0507 19:33:57.744652       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0507 19:55:40.581757    5068 command_runner.go:130] ! I0507 19:33:57.744737       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000"
	I0507 19:55:40.581796    5068 command_runner.go:130] ! I0507 19:33:57.744768       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0507 19:55:40.581796    5068 command_runner.go:130] ! I0507 19:33:57.764904       1 shared_informer.go:320] Caches are synced for resource quota
	I0507 19:55:40.581796    5068 command_runner.go:130] ! I0507 19:33:57.793156       1 shared_informer.go:320] Caches are synced for deployment
	I0507 19:55:40.581842    5068 command_runner.go:130] ! I0507 19:33:57.806522       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0507 19:55:40.581842    5068 command_runner.go:130] ! I0507 19:33:57.841338       1 shared_informer.go:320] Caches are synced for disruption
	I0507 19:55:40.581842    5068 command_runner.go:130] ! I0507 19:33:57.848178       1 shared_informer.go:320] Caches are synced for attach detach
	I0507 19:55:40.581880    5068 command_runner.go:130] ! I0507 19:33:57.857076       1 shared_informer.go:320] Caches are synced for resource quota
	I0507 19:55:40.581880    5068 command_runner.go:130] ! I0507 19:33:58.320735       1 shared_informer.go:320] Caches are synced for garbage collector
	I0507 19:55:40.581880    5068 command_runner.go:130] ! I0507 19:33:58.353360       1 shared_informer.go:320] Caches are synced for garbage collector
	I0507 19:55:40.581925    5068 command_runner.go:130] ! I0507 19:33:58.353634       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0507 19:55:40.581925    5068 command_runner.go:130] ! I0507 19:33:58.648491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="254.239192ms"
	I0507 19:55:40.581963    5068 command_runner.go:130] ! I0507 19:33:58.768889       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="120.227252ms"
	I0507 19:55:40.581963    5068 command_runner.go:130] ! I0507 19:33:58.768980       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.703µs"
	I0507 19:55:40.582008    5068 command_runner.go:130] ! I0507 19:33:59.385629       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.4593ms"
	I0507 19:55:40.582048    5068 command_runner.go:130] ! I0507 19:33:59.400563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.850657ms"
	I0507 19:55:40.582048    5068 command_runner.go:130] ! I0507 19:33:59.442803       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.020809ms"
	I0507 19:55:40.582093    5068 command_runner.go:130] ! I0507 19:33:59.442937       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.204µs"
	I0507 19:55:40.582093    5068 command_runner.go:130] ! I0507 19:34:10.730717       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="75.405µs"
	I0507 19:55:40.582131    5068 command_runner.go:130] ! I0507 19:34:10.778543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="100.807µs"
	I0507 19:55:40.582131    5068 command_runner.go:130] ! I0507 19:34:12.746728       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0507 19:55:40.582174    5068 command_runner.go:130] ! I0507 19:34:12.843910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.905µs"
	I0507 19:55:40.582174    5068 command_runner.go:130] ! I0507 19:34:12.916087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.128233ms"
	I0507 19:55:40.582213    5068 command_runner.go:130] ! I0507 19:34:12.920189       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="131.008µs"
	I0507 19:55:40.582213    5068 command_runner.go:130] ! I0507 19:36:39.748714       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m02\" does not exist"
	I0507 19:55:40.582257    5068 command_runner.go:130] ! I0507 19:36:39.768095       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000-m02" podCIDRs=["10.244.1.0/24"]
	I0507 19:55:40.582295    5068 command_runner.go:130] ! I0507 19:36:42.771386       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000-m02"
	I0507 19:55:40.582295    5068 command_runner.go:130] ! I0507 19:36:59.833069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:40.582340    5068 command_runner.go:130] ! I0507 19:37:23.261574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.822997ms"
	I0507 19:55:40.582378    5068 command_runner.go:130] ! I0507 19:37:23.275925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.242181ms"
	I0507 19:55:40.582378    5068 command_runner.go:130] ! I0507 19:37:23.277411       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.303µs"
	I0507 19:55:40.582417    5068 command_runner.go:130] ! I0507 19:37:25.468822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.984518ms"
	I0507 19:55:40.582417    5068 command_runner.go:130] ! I0507 19:37:25.471412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.381856ms"
	I0507 19:55:40.582455    5068 command_runner.go:130] ! I0507 19:37:26.028543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.755438ms"
	I0507 19:55:40.582494    5068 command_runner.go:130] ! I0507 19:37:26.029180       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.706µs"
	I0507 19:55:40.582494    5068 command_runner.go:130] ! I0507 19:40:53.034791       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:40.582534    5068 command_runner.go:130] ! I0507 19:40:53.035911       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m03\" does not exist"
	I0507 19:55:40.582534    5068 command_runner.go:130] ! I0507 19:40:53.048242       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000-m03" podCIDRs=["10.244.2.0/24"]
	I0507 19:55:40.582573    5068 command_runner.go:130] ! I0507 19:40:57.837925       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000-m03"
	I0507 19:55:40.582573    5068 command_runner.go:130] ! I0507 19:41:13.622605       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:40.582573    5068 command_runner.go:130] ! I0507 19:48:02.948548       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:40.582615    5068 command_runner.go:130] ! I0507 19:50:20.695158       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:40.582615    5068 command_runner.go:130] ! I0507 19:50:25.866050       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m03\" does not exist"
	I0507 19:55:40.582615    5068 command_runner.go:130] ! I0507 19:50:25.866126       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:40.582615    5068 command_runner.go:130] ! I0507 19:50:25.887459       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000-m03" podCIDRs=["10.244.3.0/24"]
	I0507 19:55:40.582770    5068 command_runner.go:130] ! I0507 19:50:31.631900       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:40.582791    5068 command_runner.go:130] ! I0507 19:51:58.074557       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:40.602113    5068 logs.go:123] Gathering logs for kindnet [29b5cae0b8f1] ...
	I0507 19:55:40.603120    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b5cae0b8f1"
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:54:35.653367       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:54:35.653969       1 main.go:107] hostIP = 172.19.135.22
	I0507 19:55:40.634954    5068 command_runner.go:130] ! podIP = 172.19.143.74
	I0507 19:55:40.634954    5068 command_runner.go:130] ! W0507 19:54:35.653976       1 main.go:109] hostIP(= "172.19.135.22") != podIP(= "172.19.143.74") but must be running with host network: 
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:54:35.655401       1 main.go:116] setting mtu 1500 for CNI 
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:54:35.655532       1 main.go:146] kindnetd IP family: "ipv4"
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:54:35.655617       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:55:05.983217       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:55:06.001182       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:55:06.001219       1 main.go:227] handling current node
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:55:06.001493       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:55:06.001598       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:55:06.001955       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.19.143.144 Flags: [] Table: 0} 
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:55:06.036933       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:55:06.037052       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:55:06.037122       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.19.129.4 Flags: [] Table: 0} 
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:55:16.046470       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:55:16.046556       1 main.go:227] handling current node
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:55:16.046569       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:55:16.046577       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:55:16.046933       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:40.634954    5068 command_runner.go:130] ! I0507 19:55:16.046957       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:40.635622    5068 command_runner.go:130] ! I0507 19:55:26.058109       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 19:55:40.635622    5068 command_runner.go:130] ! I0507 19:55:26.058254       1 main.go:227] handling current node
	I0507 19:55:40.635622    5068 command_runner.go:130] ! I0507 19:55:26.058265       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:40.635622    5068 command_runner.go:130] ! I0507 19:55:26.058271       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:40.635694    5068 command_runner.go:130] ! I0507 19:55:26.058667       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:40.635694    5068 command_runner.go:130] ! I0507 19:55:26.058697       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:40.635694    5068 command_runner.go:130] ! I0507 19:55:36.070650       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 19:55:40.635758    5068 command_runner.go:130] ! I0507 19:55:36.070781       1 main.go:227] handling current node
	I0507 19:55:40.635758    5068 command_runner.go:130] ! I0507 19:55:36.070793       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:40.635758    5068 command_runner.go:130] ! I0507 19:55:36.070834       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:40.635758    5068 command_runner.go:130] ! I0507 19:55:36.071124       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:40.635758    5068 command_runner.go:130] ! I0507 19:55:36.071149       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:40.640154    5068 logs.go:123] Gathering logs for container status ...
	I0507 19:55:40.640214    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 19:55:40.697608    5068 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0507 19:55:40.697707    5068 command_runner.go:130] > 78ecb8cdfd06c       8c811b4aec35f                                                                                         2 seconds ago        Running             busybox                   1                   f8dc35309168f       busybox-fc5497c4f-gcqlv
	I0507 19:55:40.697795    5068 command_runner.go:130] > d27627c198085       cbb01a7bd410d                                                                                         2 seconds ago        Running             coredns                   1                   56c438bec1777       coredns-7db6d8ff4d-5j966
	I0507 19:55:40.697795    5068 command_runner.go:130] > 4c93a69b2eee4       6e38f40d628db                                                                                         24 seconds ago       Running             storage-provisioner       2                   09d2fda974adf       storage-provisioner
	I0507 19:55:40.697884    5068 command_runner.go:130] > 29b5cae0b8f14       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   857f6b5630910       kindnet-zw4r9
	I0507 19:55:40.697884    5068 command_runner.go:130] > 5255a972ff6ce       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   deb171c003562       kube-proxy-c9gw5
	I0507 19:55:40.697884    5068 command_runner.go:130] > d1e3e4629bc4a       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   09d2fda974adf       storage-provisioner
	I0507 19:55:40.698031    5068 command_runner.go:130] > 7c95e3addc4b8       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   fec63580ff266       kube-apiserver-multinode-600000
	I0507 19:55:40.698031    5068 command_runner.go:130] > ac320a872e77c       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   c666fba0d0753       etcd-multinode-600000
	I0507 19:55:40.698031    5068 command_runner.go:130] > 922d1e2b87454       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   5c37290307d14       kube-controller-manager-multinode-600000
	I0507 19:55:40.698135    5068 command_runner.go:130] > 45341720d5be3       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   89c8a2313bcaf       kube-scheduler-multinode-600000
	I0507 19:55:40.698331    5068 command_runner.go:130] > 66301c2be7060       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago       Exited              busybox                   0                   4afb10dc8b115       busybox-fc5497c4f-gcqlv
	I0507 19:55:40.698880    5068 command_runner.go:130] > 9550b237d8d7b       cbb01a7bd410d                                                                                         21 minutes ago       Exited              coredns                   0                   99af61c6e282a       coredns-7db6d8ff4d-5j966
	I0507 19:55:40.698973    5068 command_runner.go:130] > 2d49ad078ed35       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              21 minutes ago       Exited              kindnet-cni               0                   58ebd877d77fb       kindnet-zw4r9
	I0507 19:55:40.699045    5068 command_runner.go:130] > aa9692c1fbd3b       a0bf559e280cf                                                                                         21 minutes ago       Exited              kube-proxy                0                   70cff02905e8f       kube-proxy-c9gw5
	I0507 19:55:40.699143    5068 command_runner.go:130] > 7cefdac2050fa       259c8277fcbbc                                                                                         22 minutes ago       Exited              kube-scheduler            0                   75f27faec2ed6       kube-scheduler-multinode-600000
	I0507 19:55:40.699219    5068 command_runner.go:130] > 3067f16e2e380       c7aad43836fa5                                                                                         22 minutes ago       Exited              kube-controller-manager   0                   af16a92d7c1cc       kube-controller-manager-multinode-600000
	I0507 19:55:40.703711    5068 logs.go:123] Gathering logs for Docker ...
	I0507 19:55:40.703743    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 19:55:40.734495    5068 command_runner.go:130] > May 07 19:53:11 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0507 19:55:40.734495    5068 command_runner.go:130] > May 07 19:53:11 minikube cri-dockerd[223]: time="2024-05-07T19:53:11Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0507 19:55:40.734596    5068 command_runner.go:130] > May 07 19:53:11 minikube cri-dockerd[223]: time="2024-05-07T19:53:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0507 19:55:40.734596    5068 command_runner.go:130] > May 07 19:53:11 minikube cri-dockerd[223]: time="2024-05-07T19:53:11Z" level=info msg="Start docker client with request timeout 0s"
	I0507 19:55:40.734596    5068 command_runner.go:130] > May 07 19:53:11 minikube cri-dockerd[223]: time="2024-05-07T19:53:11Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0507 19:55:40.734596    5068 command_runner.go:130] > May 07 19:53:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0507 19:55:40.734691    5068 command_runner.go:130] > May 07 19:53:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0507 19:55:40.734691    5068 command_runner.go:130] > May 07 19:53:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0507 19:55:40.734691    5068 command_runner.go:130] > May 07 19:53:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0507 19:55:40.734691    5068 command_runner.go:130] > May 07 19:53:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0507 19:55:40.734691    5068 command_runner.go:130] > May 07 19:53:14 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0507 19:55:40.734780    5068 command_runner.go:130] > May 07 19:53:14 minikube cri-dockerd[420]: time="2024-05-07T19:53:14Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0507 19:55:40.734780    5068 command_runner.go:130] > May 07 19:53:14 minikube cri-dockerd[420]: time="2024-05-07T19:53:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0507 19:55:40.734780    5068 command_runner.go:130] > May 07 19:53:14 minikube cri-dockerd[420]: time="2024-05-07T19:53:14Z" level=info msg="Start docker client with request timeout 0s"
	I0507 19:55:40.734780    5068 command_runner.go:130] > May 07 19:53:14 minikube cri-dockerd[420]: time="2024-05-07T19:53:14Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0507 19:55:40.734912    5068 command_runner.go:130] > May 07 19:53:14 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0507 19:55:40.734912    5068 command_runner.go:130] > May 07 19:53:14 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0507 19:55:40.734975    5068 command_runner.go:130] > May 07 19:53:14 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0507 19:55:40.735015    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0507 19:55:40.735015    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0507 19:55:40.735015    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0507 19:55:40.735108    5068 command_runner.go:130] > May 07 19:53:16 minikube cri-dockerd[428]: time="2024-05-07T19:53:16Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0507 19:55:40.735108    5068 command_runner.go:130] > May 07 19:53:16 minikube cri-dockerd[428]: time="2024-05-07T19:53:16Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0507 19:55:40.735108    5068 command_runner.go:130] > May 07 19:53:16 minikube cri-dockerd[428]: time="2024-05-07T19:53:16Z" level=info msg="Start docker client with request timeout 0s"
	I0507 19:55:40.735108    5068 command_runner.go:130] > May 07 19:53:16 minikube cri-dockerd[428]: time="2024-05-07T19:53:16Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0507 19:55:40.735211    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0507 19:55:40.735211    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0507 19:55:40.735211    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0507 19:55:40.735319    5068 command_runner.go:130] > May 07 19:53:18 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0507 19:55:40.735319    5068 command_runner.go:130] > May 07 19:53:18 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0507 19:55:40.735393    5068 command_runner.go:130] > May 07 19:53:18 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0507 19:55:40.735393    5068 command_runner.go:130] > May 07 19:53:18 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0507 19:55:40.735432    5068 command_runner.go:130] > May 07 19:53:18 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0507 19:55:40.735512    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 systemd[1]: Starting Docker Application Container Engine...
	I0507 19:55:40.735512    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[656]: time="2024-05-07T19:53:56.261608662Z" level=info msg="Starting up"
	I0507 19:55:40.735512    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[656]: time="2024-05-07T19:53:56.264255181Z" level=info msg="containerd not running, starting managed containerd"
	I0507 19:55:40.735584    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[656]: time="2024-05-07T19:53:56.267798843Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	I0507 19:55:40.735640    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.292663096Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0507 19:55:40.735640    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.316810753Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0507 19:55:40.735735    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.316928685Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0507 19:55:40.735735    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.317059021Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0507 19:55:40.735806    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.317074525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:40.735863    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.317778516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:40.735863    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.317870241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:40.735863    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.318053591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:40.735969    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.318181025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:40.735969    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.318200831Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0507 19:55:40.736068    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.318211033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:40.736068    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.318648452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:40.736068    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.319370548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:40.736167    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.322128697Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:40.736167    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.322287440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:40.736265    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.322423477Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:40.736363    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.322511301Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0507 19:55:40.736363    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.323103462Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0507 19:55:40.736363    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.323264406Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0507 19:55:40.736363    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.323281010Z" level=info msg="metadata content store policy set" policy=shared
	I0507 19:55:40.736463    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.329512102Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0507 19:55:40.736463    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.329607228Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0507 19:55:40.736463    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.329699453Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0507 19:55:40.736560    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.329991833Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0507 19:55:40.736560    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.330149675Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0507 19:55:40.736560    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.330391841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0507 19:55:40.736660    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331279682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0507 19:55:40.736660    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331558958Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0507 19:55:40.736660    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331719502Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0507 19:55:40.736759    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331752511Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0507 19:55:40.736759    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331780218Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0507 19:55:40.736759    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331804825Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0507 19:55:40.736857    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332099005Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0507 19:55:40.736857    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332235742Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0507 19:55:40.736857    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332267150Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0507 19:55:40.736949    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332290657Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0507 19:55:40.736949    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332323766Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0507 19:55:40.737047    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332346572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0507 19:55:40.737047    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332381181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.737047    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332407189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.737143    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332431795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.737143    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332459103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.737143    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332481509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.737243    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332504615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.737243    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332528722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.737243    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332552728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.737340    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332576134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.737340    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332603642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.737340    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332625548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.737439    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332651055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.737439    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332673961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.737439    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333069468Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0507 19:55:40.737536    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333235413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.737536    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333383554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.737536    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333414662Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0507 19:55:40.737632    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333616417Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0507 19:55:40.737632    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333710943Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0507 19:55:40.737728    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333725547Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0507 19:55:40.737826    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333736349Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0507 19:55:40.737826    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333796266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.737826    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333810170Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0507 19:55:40.737927    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333876888Z" level=info msg="NRI interface is disabled by configuration."
	I0507 19:55:40.737927    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.334581479Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0507 19:55:40.737927    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.334799638Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0507 19:55:40.738024    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.335014597Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0507 19:55:40.738024    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.335347487Z" level=info msg="containerd successfully booted in 0.045275s"
	I0507 19:55:40.738024    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.321187459Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0507 19:55:40.738120    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.476287680Z" level=info msg="Loading containers: start."
	I0507 19:55:40.738120    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.877079663Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0507 19:55:40.738120    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.952570655Z" level=info msg="Loading containers: done."
	I0507 19:55:40.738219    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.979382413Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0507 19:55:40.738219    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.980260841Z" level=info msg="Daemon has completed initialization"
	I0507 19:55:40.738219    5068 command_runner.go:130] > May 07 19:53:58 multinode-600000 dockerd[656]: time="2024-05-07T19:53:58.031005949Z" level=info msg="API listen on [::]:2376"
	I0507 19:55:40.738219    5068 command_runner.go:130] > May 07 19:53:58 multinode-600000 systemd[1]: Started Docker Application Container Engine.
	I0507 19:55:40.738317    5068 command_runner.go:130] > May 07 19:53:58 multinode-600000 dockerd[656]: time="2024-05-07T19:53:58.031256476Z" level=info msg="API listen on /var/run/docker.sock"
	I0507 19:55:40.738317    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 systemd[1]: Stopping Docker Application Container Engine...
	I0507 19:55:40.738317    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 dockerd[656]: time="2024-05-07T19:54:20.774198260Z" level=info msg="Processing signal 'terminated'"
	I0507 19:55:40.738417    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 dockerd[656]: time="2024-05-07T19:54:20.776613097Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0507 19:55:40.738417    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 dockerd[656]: time="2024-05-07T19:54:20.776805608Z" level=info msg="Daemon shutdown complete"
	I0507 19:55:40.738417    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 dockerd[656]: time="2024-05-07T19:54:20.776895213Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0507 19:55:40.738518    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 dockerd[656]: time="2024-05-07T19:54:20.776925814Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0507 19:55:40.738518    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 systemd[1]: docker.service: Deactivated successfully.
	I0507 19:55:40.738518    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 systemd[1]: Stopped Docker Application Container Engine.
	I0507 19:55:40.738518    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 systemd[1]: Starting Docker Application Container Engine...
	I0507 19:55:40.738645    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:21.844803108Z" level=info msg="Starting up"
	I0507 19:55:40.738645    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:21.845592952Z" level=info msg="containerd not running, starting managed containerd"
	I0507 19:55:40.738645    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:21.846791420Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1053
	I0507 19:55:40.738747    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.877926981Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0507 19:55:40.738747    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907006826Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0507 19:55:40.738747    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907105131Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0507 19:55:40.738853    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907143533Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0507 19:55:40.738853    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907156034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:40.738853    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907277841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:40.738957    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907322244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:40.739071    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907477852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:40.739071    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907596759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:40.739071    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907616260Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0507 19:55:40.739170    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907627661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:40.739170    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907658363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:40.739170    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907868674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:40.739266    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.910668333Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:40.739266    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.910832542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:40.739365    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.910974650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:40.739738    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911056755Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0507 19:55:40.739821    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911079056Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0507 19:55:40.739821    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911093757Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0507 19:55:40.739867    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911103457Z" level=info msg="metadata content store policy set" policy=shared
	I0507 19:55:40.739962    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911348471Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0507 19:55:40.739999    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911388073Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0507 19:55:40.740067    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911402674Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0507 19:55:40.740100    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911415475Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0507 19:55:40.740100    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911427076Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911464678Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911666589Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911840999Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911855900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911868601Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911909603Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911924204Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911941405Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911955506Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911969406Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911987907Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912002408Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912014509Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912032910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912048811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912061212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912073812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912085813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912098614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912110514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912123015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912136916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912151617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912162617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912174218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912189019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912203420Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0507 19:55:40.740153    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912223321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.740692    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912235321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.740692    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912245922Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0507 19:55:40.740692    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912307726Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912877958Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912987064Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913005665Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913060968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913148473Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913162874Z" level=info msg="NRI interface is disabled by configuration."
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913518894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913666902Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913836712Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913869014Z" level=info msg="containerd successfully booted in 0.037038s"
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:22 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:22.886642029Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:22 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:22.917701485Z" level=info msg="Loading containers: start."
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.220079986Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.297928389Z" level=info msg="Loading containers: done."
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.323426131Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.323561939Z" level=info msg="Daemon has completed initialization"
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.371361642Z" level=info msg="API listen on /var/run/docker.sock"
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.371563053Z" level=info msg="API listen on [::]:2376"
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 systemd[1]: Started Docker Application Container Engine.
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Start docker client with request timeout 0s"
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Loaded network plugin cni"
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0507 19:55:40.740810    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0507 19:55:40.741350    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0507 19:55:40.741350    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Start cri-dockerd grpc backend"
	I0507 19:55:40.741441    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0507 19:55:40.741476    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:28Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-5j966_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"99af61c6e282aa13c7209e469e5e354f24968796fc455a65fdf2e8611f760994\""
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:28Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-gcqlv_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4afb10dc8b11575b4eaa25a6b283141c6e029c9b44d3db3a69e4c934171b778e\""
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.542938073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.543010577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.543042179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.543273292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89c8a2313bcaf38f51cf6dbb015e4b3d1ed11fef724fa2a2ecfd86165a93435e/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.675480269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.675546573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.675564974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.684262666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.725921222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.726068230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.726254241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.726575359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.765272147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.765421056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.765494660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.766208600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c37290307d14956d6c732916d8f8cad779b8e57047c0b20cc5a97abeea21709/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c666fba0d07531cb6ff4a110f6538c8fbffaa474e8b7744eecd95c2c5449ac24/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.943914850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.944218768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.944339474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.944568887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.741530    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fec63580ff2669cca3046ae403d6a288bb279ca84766c91bd6464d8b2335c567/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:40.742113    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.094912590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.095972050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.096703691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.098389387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.174777807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.174917115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.174947116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.175427944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.179401568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.180225415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.180387824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.180691941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:33Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.393545198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.393776611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.393798612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.393904518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.429313521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.429355823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.429371924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.429510732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.450929143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.451230160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.451320165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.451541578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/09d2fda974adf9dbabc54b3412155043fbda490a951a6b325ac66ef3e385e99d/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/deb171c003562d2f3e3c8e1ec2fbec5ecaa700e48e277dd0cc50addf6cbb21a3/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/857f6b563091091373f72d143ed2af0ab7469cb77eb82675a7f665d172f1793a/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.950666506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.951075429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.951189235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.951373146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.055721147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.055815952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.055860855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.056635099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.189264699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.189723325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.189831731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.190012442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 dockerd[1047]: time="2024-05-07T19:55:05.347820040Z" level=info msg="ignoring event" container=d1e3e4629bc4ab52c27aca01f9ac01a28969e78a370077ee687920a51d952e19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:05.348040655Z" level=info msg="shim disconnected" id=d1e3e4629bc4ab52c27aca01f9ac01a28969e78a370077ee687920a51d952e19 namespace=moby
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:05.348091458Z" level=warning msg="cleaning up after shim disconnected" id=d1e3e4629bc4ab52c27aca01f9ac01a28969e78a370077ee687920a51d952e19 namespace=moby
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:05.348099558Z" level=info msg="cleaning up dead shim" namespace=moby
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:55:17 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:17.037412688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:55:17 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:17.037563097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:55:17 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:17.037957521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.742151    5068 command_runner.go:130] > May 07 19:55:17 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:17.038368445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.073681495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.075144480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.075421996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.075618907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.083978388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.085517877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.085609682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.085891498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:55:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/56c438bec17775a85810d84da03e966b7c8b3307695f327170eb2d1f6f413190/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:55:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f8dc35309168fbb7208444e18cedbe0a5ab2522d363e8b998b56b731b941b23c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.552043154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.552176862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.552192263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.552275368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.595560233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.595882353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.595904855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.596079265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:40.743113    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:40.771969    5068 logs.go:123] Gathering logs for kubelet ...
	I0507 19:55:40.771969    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 19:55:40.802264    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0507 19:55:40.802514    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 kubelet[1385]: I0507 19:54:25.312690    1385 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0507 19:55:40.802514    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 kubelet[1385]: I0507 19:54:25.313053    1385 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:40.802667    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 kubelet[1385]: I0507 19:54:25.314038    1385 server.go:927] "Client rotation is on, will bootstrap in background"
	I0507 19:55:40.802667    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 kubelet[1385]: E0507 19:54:25.314980    1385 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0507 19:55:40.802809    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0507 19:55:40.802809    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0507 19:55:40.802809    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0507 19:55:40.802939    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0507 19:55:40.802939    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0507 19:55:40.802939    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 kubelet[1417]: I0507 19:54:26.032056    1417 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0507 19:55:40.803094    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 kubelet[1417]: I0507 19:54:26.032321    1417 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:40.803094    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 kubelet[1417]: I0507 19:54:26.032668    1417 server.go:927] "Client rotation is on, will bootstrap in background"
	I0507 19:55:40.803094    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 kubelet[1417]: E0507 19:54:26.032817    1417 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0507 19:55:40.803247    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0507 19:55:40.803247    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0507 19:55:40.803247    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	I0507 19:55:40.803408    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0507 19:55:40.803408    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0507 19:55:40.803408    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I0507 19:55:40.803498    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: kubelet.service: Deactivated successfully.
	I0507 19:55:40.803498    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0507 19:55:40.803543    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0507 19:55:40.803576    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.682448    1526 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0507 19:55:40.803576    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.683051    1526 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:40.803697    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.683318    1526 server.go:927] "Client rotation is on, will bootstrap in background"
	I0507 19:55:40.803749    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.685208    1526 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0507 19:55:40.803787    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.694353    1526 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0507 19:55:40.803836    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.719318    1526 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0507 19:55:40.803836    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.719480    1526 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0507 19:55:40.803929    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.720216    1526 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0507 19:55:40.804127    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.720309    1526 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-600000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0507 19:55:40.804127    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.720926    1526 topology_manager.go:138] "Creating topology manager with none policy"
	I0507 19:55:40.804201    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.721001    1526 container_manager_linux.go:301] "Creating device plugin manager"
	I0507 19:55:40.804247    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.721416    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0507 19:55:40.804283    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.723173    1526 kubelet.go:400] "Attempting to sync node with API server"
	I0507 19:55:40.804283    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.723253    1526 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0507 19:55:40.804357    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.723313    1526 kubelet.go:312] "Adding apiserver pod source"
	I0507 19:55:40.804404    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.723974    1526 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0507 19:55:40.804439    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: W0507 19:54:28.726787    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-600000&limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:40.804515    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.726939    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-600000&limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:40.804560    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.731381    1526 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0507 19:55:40.804593    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.733269    1526 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0507 19:55:40.804667    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: W0507 19:54:28.734851    1526 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0507 19:55:40.804713    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.736816    1526 server.go:1264] "Started kubelet"
	I0507 19:55:40.804748    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: W0507 19:54:28.737228    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:40.804830    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.737335    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:40.804830    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.738410    1526 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0507 19:55:40.804931    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.740846    1526 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0507 19:55:40.804931    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.742005    1526 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0507 19:55:40.805038    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.742309    1526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.19.135.22:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-600000.17cd4cf9c52f26de  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-600000,UID:multinode-600000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-600000,},FirstTimestamp:2024-05-07 19:54:28.736796382 +0000 UTC m=+0.138302022,LastTimestamp:2024-05-07 19:54:28.736796382 +0000 UTC m=+0.138302022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-600000,}"
	I0507 19:55:40.805038    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.743118    1526 server.go:455] "Adding debug handlers to kubelet server"
	I0507 19:55:40.805144    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.749839    1526 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0507 19:55:40.805144    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.768561    1526 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0507 19:55:40.805246    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: W0507 19:54:28.769072    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:40.805246    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.769183    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:40.805346    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.769400    1526 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0507 19:55:40.805346    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.769456    1526 factory.go:221] Registration of the systemd container factory successfully
	I0507 19:55:40.805346    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.770894    1526 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0507 19:55:40.805445    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.772962    1526 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0507 19:55:40.805445    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.785539    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-600000?timeout=10s\": dial tcp 172.19.135.22:8443: connect: connection refused" interval="200ms"
	I0507 19:55:40.805545    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.791725    1526 reconciler.go:26] "Reconciler: start to sync state"
	I0507 19:55:40.805545    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.830988    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0507 19:55:40.805646    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.840813    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0507 19:55:40.805646    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.840916    1526 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0507 19:55:40.805646    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.841140    1526 kubelet.go:2337] "Starting kubelet main sync loop"
	I0507 19:55:40.805747    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.841245    1526 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0507 19:55:40.805747    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: W0507 19:54:28.856981    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:40.805846    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.857107    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:40.805846    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.863787    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0507 19:55:40.805846    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0507 19:55:40.805944    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0507 19:55:40.805944    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0507 19:55:40.806056    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0507 19:55:40.806056    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.867313    1526 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0507 19:55:40.806111    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.867334    1526 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0507 19:55:40.806111    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.867353    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0507 19:55:40.806159    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.867956    1526 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0507 19:55:40.806213    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.867975    1526 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0507 19:55:40.806213    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.868003    1526 policy_none.go:49] "None policy: Start"
	I0507 19:55:40.806261    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.868488    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-600000"
	I0507 19:55:40.806314    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.869266    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.135.22:8443: connect: connection refused" node="multinode-600000"
	I0507 19:55:40.806373    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.874219    1526 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0507 19:55:40.806428    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.874241    1526 state_mem.go:35] "Initializing new in-memory state store"
	I0507 19:55:40.806474    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.875298    1526 state_mem.go:75] "Updated machine memory state"
	I0507 19:55:40.806474    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.878167    1526 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0507 19:55:40.806592    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.878458    1526 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0507 19:55:40.806592    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.880352    1526 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.881798    1526 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-600000\" not found"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.941803    1526 topology_manager.go:215] "Topology Admit Handler" podUID="cd9cba8f94818776ec6d8836322192b3" podNamespace="kube-system" podName="kube-apiserver-multinode-600000"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.944197    1526 topology_manager.go:215] "Topology Admit Handler" podUID="f5d6aa60dc93b5e562f37ed2236c3022" podNamespace="kube-system" podName="kube-controller-manager-multinode-600000"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.945407    1526 topology_manager.go:215] "Topology Admit Handler" podUID="7c4ee79f6d4f6adb00b636f817445fef" podNamespace="kube-system" podName="kube-scheduler-multinode-600000"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.946291    1526 topology_manager.go:215] "Topology Admit Handler" podUID="1581bf6b00d338797c8fb8b10b74abde" podNamespace="kube-system" podName="etcd-multinode-600000"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.947956    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86921e7643746441a6e93f7fb6fecdf7c7bf46b090192f2fc398129fad83dd9d"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.947978    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70cff02905e8f07315ff7e01ce388c0da3246f3c03bb7c785b3b7979a31852a9"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.948141    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58ebd877d77fb0eee19924ed195f0ccced541015095c32b9d58ab78831543622"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.948156    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f27faec2ed6996286f7030cea68f26137cea7abaedede628d29933fbde0ae9"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.959165    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99af61c6e282aa13c7209e469e5e354f24968796fc455a65fdf2e8611f760994"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.970524    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57950c0fdcbe4c7e6d3490c6477c947eac153e908d8e81090ef8205a050bb14c"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.987462    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-600000?timeout=10s\": dial tcp 172.19.135.22:8443: connect: connection refused" interval="400ms"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.989236    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca0d420373470a8f3b23bd3c9b5c59f5e7c4896da57782b69f9498d3ff333fb5"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.000822    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4afb10dc8b11575b4eaa25a6b283141c6e029c9b44d3db3a69e4c934171b778e"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010098    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd9cba8f94818776ec6d8836322192b3-k8s-certs\") pod \"kube-apiserver-multinode-600000\" (UID: \"cd9cba8f94818776ec6d8836322192b3\") " pod="kube-system/kube-apiserver-multinode-600000"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010146    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5d6aa60dc93b5e562f37ed2236c3022-flexvolume-dir\") pod \"kube-controller-manager-multinode-600000\" (UID: \"f5d6aa60dc93b5e562f37ed2236c3022\") " pod="kube-system/kube-controller-manager-multinode-600000"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010167    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5d6aa60dc93b5e562f37ed2236c3022-kubeconfig\") pod \"kube-controller-manager-multinode-600000\" (UID: \"f5d6aa60dc93b5e562f37ed2236c3022\") " pod="kube-system/kube-controller-manager-multinode-600000"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010187    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c4ee79f6d4f6adb00b636f817445fef-kubeconfig\") pod \"kube-scheduler-multinode-600000\" (UID: \"7c4ee79f6d4f6adb00b636f817445fef\") " pod="kube-system/kube-scheduler-multinode-600000"
	I0507 19:55:40.806647    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010223    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/1581bf6b00d338797c8fb8b10b74abde-etcd-certs\") pod \"etcd-multinode-600000\" (UID: \"1581bf6b00d338797c8fb8b10b74abde\") " pod="kube-system/etcd-multinode-600000"
	I0507 19:55:40.807182    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010245    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd9cba8f94818776ec6d8836322192b3-ca-certs\") pod \"kube-apiserver-multinode-600000\" (UID: \"cd9cba8f94818776ec6d8836322192b3\") " pod="kube-system/kube-apiserver-multinode-600000"
	I0507 19:55:40.807235    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010264    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5d6aa60dc93b5e562f37ed2236c3022-ca-certs\") pod \"kube-controller-manager-multinode-600000\" (UID: \"f5d6aa60dc93b5e562f37ed2236c3022\") " pod="kube-system/kube-controller-manager-multinode-600000"
	I0507 19:55:40.807283    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010292    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5d6aa60dc93b5e562f37ed2236c3022-k8s-certs\") pod \"kube-controller-manager-multinode-600000\" (UID: \"f5d6aa60dc93b5e562f37ed2236c3022\") " pod="kube-system/kube-controller-manager-multinode-600000"
	I0507 19:55:40.807397    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010323    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5d6aa60dc93b5e562f37ed2236c3022-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-600000\" (UID: \"f5d6aa60dc93b5e562f37ed2236c3022\") " pod="kube-system/kube-controller-manager-multinode-600000"
	I0507 19:55:40.807449    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010365    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/1581bf6b00d338797c8fb8b10b74abde-etcd-data\") pod \"etcd-multinode-600000\" (UID: \"1581bf6b00d338797c8fb8b10b74abde\") " pod="kube-system/etcd-multinode-600000"
	I0507 19:55:40.807496    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010413    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd9cba8f94818776ec6d8836322192b3-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-600000\" (UID: \"cd9cba8f94818776ec6d8836322192b3\") " pod="kube-system/kube-apiserver-multinode-600000"
	I0507 19:55:40.807548    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.013343    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af16a92d7c1cc8f0246bdad95c9e580f729470ea118e03dce721c77127d06f56"
	I0507 19:55:40.807607    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.071582    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-600000"
	I0507 19:55:40.807660    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.072513    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.135.22:8443: connect: connection refused" node="multinode-600000"
	I0507 19:55:40.807765    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.389792    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-600000?timeout=10s\": dial tcp 172.19.135.22:8443: connect: connection refused" interval="800ms"
	I0507 19:55:40.807765    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.474674    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-600000"
	I0507 19:55:40.807875    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.475643    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.135.22:8443: connect: connection refused" node="multinode-600000"
	I0507 19:55:40.807875    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: W0507 19:54:29.564966    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:40.807995    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.565028    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:40.808046    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: W0507 19:54:29.712836    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:40.808100    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.712892    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:40.808217    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: W0507 19:54:29.898338    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.898478    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 kubelet[1526]: W0507 19:54:30.187733    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-600000&limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 kubelet[1526]: E0507 19:54:30.187857    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-600000&limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 kubelet[1526]: E0507 19:54:30.195864    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-600000?timeout=10s\": dial tcp 172.19.135.22:8443: connect: connection refused" interval="1.6s"
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 kubelet[1526]: I0507 19:54:30.277090    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-600000"
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 kubelet[1526]: E0507 19:54:30.278121    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.135.22:8443: connect: connection refused" node="multinode-600000"
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:31 multinode-600000 kubelet[1526]: I0507 19:54:31.880610    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-600000"
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.731174    1526 apiserver.go:52] "Watching apiserver"
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.747542    1526 topology_manager.go:215] "Topology Admit Handler" podUID="d067d438-f4af-42e8-930d-3423a3ac211f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5j966"
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.747825    1526 topology_manager.go:215] "Topology Admit Handler" podUID="9a39807c-6243-4aa2-86f4-8626031c80a6" podNamespace="kube-system" podName="kube-proxy-c9gw5"
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.748122    1526 topology_manager.go:215] "Topology Admit Handler" podUID="b5145a4d-38aa-426e-947f-3480e269470e" podNamespace="kube-system" podName="kindnet-zw4r9"
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.748365    1526 topology_manager.go:215] "Topology Admit Handler" podUID="90142b77-53fb-42e1-94f8-7f8a3c7765ac" podNamespace="kube-system" podName="storage-provisioner"
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.748551    1526 topology_manager.go:215] "Topology Admit Handler" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a" podNamespace="default" podName="busybox-fc5497c4f-gcqlv"
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.749095    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.750550    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-600000" podUID="d55601ee-11f4-432c-8170-ecc4d8212782"
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.750908    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.770134    1526 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.810065    1526 kubelet_node_status.go:112] "Node was previously registered" node="multinode-600000"
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.810163    1526 kubelet_node_status.go:76] "Successfully registered node" node="multinode-600000"
	I0507 19:55:40.808285    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.818444    1526 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0507 19:55:40.808822    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.819648    1526 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0507 19:55:40.808933    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.820321    1526 setters.go:580] "Node became not ready" node="multinode-600000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-07T19:54:33Z","lastTransitionTime":"2024-05-07T19:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0507 19:55:40.808933    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.837252    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-600000"
	I0507 19:55:40.808933    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.845847    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a39807c-6243-4aa2-86f4-8626031c80a6-lib-modules\") pod \"kube-proxy-c9gw5\" (UID: \"9a39807c-6243-4aa2-86f4-8626031c80a6\") " pod="kube-system/kube-proxy-c9gw5"
	I0507 19:55:40.808933    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.845991    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5145a4d-38aa-426e-947f-3480e269470e-xtables-lock\") pod \"kindnet-zw4r9\" (UID: \"b5145a4d-38aa-426e-947f-3480e269470e\") " pod="kube-system/kindnet-zw4r9"
	I0507 19:55:40.808933    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.846149    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5145a4d-38aa-426e-947f-3480e269470e-lib-modules\") pod \"kindnet-zw4r9\" (UID: \"b5145a4d-38aa-426e-947f-3480e269470e\") " pod="kube-system/kindnet-zw4r9"
	I0507 19:55:40.808933    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.846211    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/90142b77-53fb-42e1-94f8-7f8a3c7765ac-tmp\") pod \"storage-provisioner\" (UID: \"90142b77-53fb-42e1-94f8-7f8a3c7765ac\") " pod="kube-system/storage-provisioner"
	I0507 19:55:40.808933    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.846289    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b5145a4d-38aa-426e-947f-3480e269470e-cni-cfg\") pod \"kindnet-zw4r9\" (UID: \"b5145a4d-38aa-426e-947f-3480e269470e\") " pod="kube-system/kindnet-zw4r9"
	I0507 19:55:40.808933    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.846373    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a39807c-6243-4aa2-86f4-8626031c80a6-xtables-lock\") pod \"kube-proxy-c9gw5\" (UID: \"9a39807c-6243-4aa2-86f4-8626031c80a6\") " pod="kube-system/kube-proxy-c9gw5"
	I0507 19:55:40.808933    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.846904    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:40.808933    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.847130    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:54:34.347095993 +0000 UTC m=+5.748601633 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:40.808933    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.887296    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.808933    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.887405    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.808933    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.887613    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:54:34.387566082 +0000 UTC m=+5.789071722 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.808933    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.981303    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-600000" podStartSLOduration=0.981289683 podStartE2EDuration="981.289683ms" podCreationTimestamp="2024-05-07 19:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-07 19:54:33.964275321 +0000 UTC m=+5.365780961" watchObservedRunningTime="2024-05-07 19:54:33.981289683 +0000 UTC m=+5.382795323"
	I0507 19:55:40.808933    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: E0507 19:54:34.351653    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:40.809469    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: E0507 19:54:34.352036    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:54:35.352015549 +0000 UTC m=+6.753521289 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:40.809517    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: E0507 19:54:34.452926    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: E0507 19:54:34.452966    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: E0507 19:54:34.453012    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:54:35.45299776 +0000 UTC m=+6.854503500 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: I0507 19:54:34.661528    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deb171c003562d2f3e3c8e1ec2fbec5ecaa700e48e277dd0cc50addf6cbb21a3"
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: I0507 19:54:34.862381    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a96b44957f27b92ef21190115bc428" path="/var/lib/kubelet/pods/b4a96b44957f27b92ef21190115bc428/volumes"
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: I0507 19:54:34.863294    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d902475f151631231b80fe38edab39e8" path="/var/lib/kubelet/pods/d902475f151631231b80fe38edab39e8/volumes"
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: I0507 19:54:34.938029    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="857f6b563091091373f72d143ed2af0ab7469cb77eb82675a7f665d172f1793a"
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: I0507 19:54:35.108646    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09d2fda974adf9dbabc54b3412155043fbda490a951a6b325ac66ef3e385e99d"
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: I0507 19:54:35.109054    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-600000" podUID="c2ba4e1a-3041-4395-a246-9dd28358b95a"
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: I0507 19:54:35.145688    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-600000"
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.358372    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.358454    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:54:37.358438267 +0000 UTC m=+8.759943907 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.459230    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.459270    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.459321    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:54:37.459300671 +0000 UTC m=+8.860806411 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.842389    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.843885    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: I0507 19:54:35.878265    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-600000" podStartSLOduration=0.878244864 podStartE2EDuration="878.244864ms" podCreationTimestamp="2024-05-07 19:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-07 19:54:35.194323185 +0000 UTC m=+6.595828825" watchObservedRunningTime="2024-05-07 19:54:35.878244864 +0000 UTC m=+7.279750504"
	I0507 19:55:40.809570    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.373090    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.373161    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:54:41.373147008 +0000 UTC m=+12.774652748 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.475199    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.475408    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.475544    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:54:41.475519298 +0000 UTC m=+12.877025038 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.842214    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.842786    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:39 multinode-600000 kubelet[1526]: E0507 19:54:39.842086    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:39 multinode-600000 kubelet[1526]: E0507 19:54:39.842432    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.418265    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.418590    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:54:49.418553195 +0000 UTC m=+20.820058935 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.518834    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.519001    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.519057    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:54:49.519041878 +0000 UTC m=+20.920547618 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.842245    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.842350    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:43 multinode-600000 kubelet[1526]: E0507 19:54:43.842034    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:43 multinode-600000 kubelet[1526]: E0507 19:54:43.842216    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:45 multinode-600000 kubelet[1526]: E0507 19:54:45.842657    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:45 multinode-600000 kubelet[1526]: E0507 19:54:45.842807    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:47 multinode-600000 kubelet[1526]: E0507 19:54:47.842575    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:47 multinode-600000 kubelet[1526]: E0507 19:54:47.843152    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.491796    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.491989    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:55:05.491971903 +0000 UTC m=+36.893477643 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.592490    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.592595    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.592653    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:55:05.592637338 +0000 UTC m=+36.994142978 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.842152    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.842295    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:51 multinode-600000 kubelet[1526]: E0507 19:54:51.841678    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:51 multinode-600000 kubelet[1526]: E0507 19:54:51.841994    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.810183    5068 command_runner.go:130] > May 07 19:54:53 multinode-600000 kubelet[1526]: E0507 19:54:53.841974    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:54:53 multinode-600000 kubelet[1526]: E0507 19:54:53.842654    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:54:55 multinode-600000 kubelet[1526]: E0507 19:54:55.842626    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:54:55 multinode-600000 kubelet[1526]: E0507 19:54:55.842841    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:54:57 multinode-600000 kubelet[1526]: E0507 19:54:57.841446    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:54:57 multinode-600000 kubelet[1526]: E0507 19:54:57.842105    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:54:59 multinode-600000 kubelet[1526]: E0507 19:54:59.842713    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:54:59 multinode-600000 kubelet[1526]: E0507 19:54:59.842855    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:01 multinode-600000 kubelet[1526]: E0507 19:55:01.842363    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:01 multinode-600000 kubelet[1526]: E0507 19:55:01.842882    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:03 multinode-600000 kubelet[1526]: E0507 19:55:03.841937    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:03 multinode-600000 kubelet[1526]: E0507 19:55:03.841997    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: I0507 19:55:05.501553    1526 scope.go:117] "RemoveContainer" containerID="232351adf489ab41e3b95183df116efc3adc75538ec9a57cef3b4ce608097033"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: I0507 19:55:05.501881    1526 scope.go:117] "RemoveContainer" containerID="d1e3e4629bc4ab52c27aca01f9ac01a28969e78a370077ee687920a51d952e19"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.502298    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(90142b77-53fb-42e1-94f8-7f8a3c7765ac)\"" pod="kube-system/storage-provisioner" podUID="90142b77-53fb-42e1-94f8-7f8a3c7765ac"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.529223    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.529356    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:55:37.529338774 +0000 UTC m=+68.930844414 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.629243    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.629467    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.629628    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:55:37.629609811 +0000 UTC m=+69.031115551 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.842421    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.842632    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:07 multinode-600000 kubelet[1526]: E0507 19:55:07.843040    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:07 multinode-600000 kubelet[1526]: E0507 19:55:07.843857    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:09 multinode-600000 kubelet[1526]: I0507 19:55:09.363617    1526 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:16 multinode-600000 kubelet[1526]: I0507 19:55:16.842451    1526 scope.go:117] "RemoveContainer" containerID="d1e3e4629bc4ab52c27aca01f9ac01a28969e78a370077ee687920a51d952e19"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]: I0507 19:55:28.871479    1526 scope.go:117] "RemoveContainer" containerID="1ad9d594832564eb3ecbb3ab96ce2eec4cb095edf31a39c051d592ae068a9a6f"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]: E0507 19:55:28.875911    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0507 19:55:40.811146    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]: I0507 19:55:28.916075    1526 scope.go:117] "RemoveContainer" containerID="675dcdcafeef04c4b82949c75f102ba97dda812ac3352b02e00d56d085f5d3bc"
	I0507 19:55:40.849739    5068 logs.go:123] Gathering logs for describe nodes ...
	I0507 19:55:40.849739    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 19:55:41.051853    5068 command_runner.go:130] > Name:               multinode-600000
	I0507 19:55:41.051952    5068 command_runner.go:130] > Roles:              control-plane
	I0507 19:55:41.051952    5068 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0507 19:55:41.052011    5068 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0507 19:55:41.052011    5068 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0507 19:55:41.052011    5068 command_runner.go:130] >                     kubernetes.io/hostname=multinode-600000
	I0507 19:55:41.052011    5068 command_runner.go:130] >                     kubernetes.io/os=linux
	I0507 19:55:41.052080    5068 command_runner.go:130] >                     minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	I0507 19:55:41.052080    5068 command_runner.go:130] >                     minikube.k8s.io/name=multinode-600000
	I0507 19:55:41.052080    5068 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0507 19:55:41.052080    5068 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_07T19_33_45_0700
	I0507 19:55:41.052138    5068 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0507 19:55:41.052138    5068 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0507 19:55:41.052138    5068 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0507 19:55:41.052196    5068 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0507 19:55:41.052196    5068 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0507 19:55:41.052251    5068 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0507 19:55:41.052251    5068 command_runner.go:130] > CreationTimestamp:  Tue, 07 May 2024 19:33:41 +0000
	I0507 19:55:41.052359    5068 command_runner.go:130] > Taints:             <none>
	I0507 19:55:41.052359    5068 command_runner.go:130] > Unschedulable:      false
	I0507 19:55:41.052359    5068 command_runner.go:130] > Lease:
	I0507 19:55:41.052359    5068 command_runner.go:130] >   HolderIdentity:  multinode-600000
	I0507 19:55:41.052359    5068 command_runner.go:130] >   AcquireTime:     <unset>
	I0507 19:55:41.052359    5068 command_runner.go:130] >   RenewTime:       Tue, 07 May 2024 19:55:35 +0000
	I0507 19:55:41.052359    5068 command_runner.go:130] > Conditions:
	I0507 19:55:41.052359    5068 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0507 19:55:41.052359    5068 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0507 19:55:41.052359    5068 command_runner.go:130] >   MemoryPressure   False   Tue, 07 May 2024 19:55:09 +0000   Tue, 07 May 2024 19:33:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0507 19:55:41.052359    5068 command_runner.go:130] >   DiskPressure     False   Tue, 07 May 2024 19:55:09 +0000   Tue, 07 May 2024 19:33:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0507 19:55:41.052359    5068 command_runner.go:130] >   PIDPressure      False   Tue, 07 May 2024 19:55:09 +0000   Tue, 07 May 2024 19:33:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0507 19:55:41.052359    5068 command_runner.go:130] >   Ready            True    Tue, 07 May 2024 19:55:09 +0000   Tue, 07 May 2024 19:55:09 +0000   KubeletReady                 kubelet is posting ready status
	I0507 19:55:41.052359    5068 command_runner.go:130] > Addresses:
	I0507 19:55:41.052359    5068 command_runner.go:130] >   InternalIP:  172.19.135.22
	I0507 19:55:41.052359    5068 command_runner.go:130] >   Hostname:    multinode-600000
	I0507 19:55:41.052359    5068 command_runner.go:130] > Capacity:
	I0507 19:55:41.052359    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:41.052359    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:41.052359    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:41.052359    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:41.052359    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:41.052359    5068 command_runner.go:130] > Allocatable:
	I0507 19:55:41.052359    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:41.052359    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:41.052359    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:41.052359    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:41.052359    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:41.052359    5068 command_runner.go:130] > System Info:
	I0507 19:55:41.052359    5068 command_runner.go:130] >   Machine ID:                 fa6f1530e0ab4546b96ea753f13add46
	I0507 19:55:41.052359    5068 command_runner.go:130] >   System UUID:                f3433f71-57fc-a747-9f8d-4f98c0c4b458
	I0507 19:55:41.052903    5068 command_runner.go:130] >   Boot ID:                    93b81312-340b-4997-83aa-cdf61cfe3dbf
	I0507 19:55:41.052903    5068 command_runner.go:130] >   Kernel Version:             5.10.207
	I0507 19:55:41.052903    5068 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0507 19:55:41.052903    5068 command_runner.go:130] >   Operating System:           linux
	I0507 19:55:41.052903    5068 command_runner.go:130] >   Architecture:               amd64
	I0507 19:55:41.052903    5068 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0507 19:55:41.053018    5068 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0507 19:55:41.053018    5068 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0507 19:55:41.053018    5068 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0507 19:55:41.053090    5068 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0507 19:55:41.053090    5068 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0507 19:55:41.053090    5068 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0507 19:55:41.053172    5068 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0507 19:55:41.053172    5068 command_runner.go:130] >   default                     busybox-fc5497c4f-gcqlv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0507 19:55:41.053239    5068 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-5j966                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	I0507 19:55:41.053239    5068 command_runner.go:130] >   kube-system                 etcd-multinode-600000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         68s
	I0507 19:55:41.053309    5068 command_runner.go:130] >   kube-system                 kindnet-zw4r9                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0507 19:55:41.053309    5068 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-600000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         66s
	I0507 19:55:41.053375    5068 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-600000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         21m
	I0507 19:55:41.053442    5068 command_runner.go:130] >   kube-system                 kube-proxy-c9gw5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0507 19:55:41.053442    5068 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-600000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         21m
	I0507 19:55:41.053508    5068 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0507 19:55:41.053508    5068 command_runner.go:130] > Allocated resources:
	I0507 19:55:41.053576    5068 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0507 19:55:41.053576    5068 command_runner.go:130] >   Resource           Requests     Limits
	I0507 19:55:41.053651    5068 command_runner.go:130] >   --------           --------     ------
	I0507 19:55:41.053675    5068 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0507 19:55:41.053675    5068 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0507 19:55:41.053675    5068 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0507 19:55:41.053675    5068 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0507 19:55:41.053675    5068 command_runner.go:130] > Events:
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0507 19:55:41.053675    5068 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  Starting                 65s                kube-proxy       
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node multinode-600000 status is now: NodeHasSufficientMemory
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node multinode-600000 status is now: NodeHasNoDiskPressure
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node multinode-600000 status is now: NodeHasSufficientPID
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    21m                kubelet          Node multinode-600000 status is now: NodeHasNoDiskPressure
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  21m                kubelet          Node multinode-600000 status is now: NodeHasSufficientMemory
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     21m                kubelet          Node multinode-600000 status is now: NodeHasSufficientPID
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  21m                kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  Starting                 21m                kubelet          Starting kubelet.
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-600000 event: Registered Node multinode-600000 in Controller
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-600000 status is now: NodeReady
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  Starting                 73s                kubelet          Starting kubelet.
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     73s (x7 over 73s)  kubelet          Node multinode-600000 status is now: NodeHasSufficientPID
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:41.053675    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  72s (x8 over 73s)  kubelet          Node multinode-600000 status is now: NodeHasSufficientMemory
	I0507 19:55:41.054213    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    72s (x8 over 73s)  kubelet          Node multinode-600000 status is now: NodeHasNoDiskPressure
	I0507 19:55:41.054213    5068 command_runner.go:130] >   Normal  RegisteredNode           55s                node-controller  Node multinode-600000 event: Registered Node multinode-600000 in Controller
	I0507 19:55:41.054213    5068 command_runner.go:130] > Name:               multinode-600000-m02
	I0507 19:55:41.054213    5068 command_runner.go:130] > Roles:              <none>
	I0507 19:55:41.054296    5068 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0507 19:55:41.054324    5068 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0507 19:55:41.054324    5068 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0507 19:55:41.054324    5068 command_runner.go:130] >                     kubernetes.io/hostname=multinode-600000-m02
	I0507 19:55:41.054324    5068 command_runner.go:130] >                     kubernetes.io/os=linux
	I0507 19:55:41.054324    5068 command_runner.go:130] >                     minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	I0507 19:55:41.054324    5068 command_runner.go:130] >                     minikube.k8s.io/name=multinode-600000
	I0507 19:55:41.054442    5068 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0507 19:55:41.054442    5068 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_07T19_36_40_0700
	I0507 19:55:41.054442    5068 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0507 19:55:41.054442    5068 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0507 19:55:41.054442    5068 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0507 19:55:41.054560    5068 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0507 19:55:41.054621    5068 command_runner.go:130] > CreationTimestamp:  Tue, 07 May 2024 19:36:39 +0000
	I0507 19:55:41.054621    5068 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0507 19:55:41.054675    5068 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0507 19:55:41.054695    5068 command_runner.go:130] > Unschedulable:      false
	I0507 19:55:41.054719    5068 command_runner.go:130] > Lease:
	I0507 19:55:41.054776    5068 command_runner.go:130] >   HolderIdentity:  multinode-600000-m02
	I0507 19:55:41.054776    5068 command_runner.go:130] >   AcquireTime:     <unset>
	I0507 19:55:41.054776    5068 command_runner.go:130] >   RenewTime:       Tue, 07 May 2024 19:51:38 +0000
	I0507 19:55:41.054876    5068 command_runner.go:130] > Conditions:
	I0507 19:55:41.054909    5068 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0507 19:55:41.054931    5068 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0507 19:55:41.055006    5068 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 07 May 2024 19:47:54 +0000   Tue, 07 May 2024 19:55:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:41.055087    5068 command_runner.go:130] >   DiskPressure     Unknown   Tue, 07 May 2024 19:47:54 +0000   Tue, 07 May 2024 19:55:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:41.055087    5068 command_runner.go:130] >   PIDPressure      Unknown   Tue, 07 May 2024 19:47:54 +0000   Tue, 07 May 2024 19:55:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:41.055161    5068 command_runner.go:130] >   Ready            Unknown   Tue, 07 May 2024 19:47:54 +0000   Tue, 07 May 2024 19:55:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:41.055219    5068 command_runner.go:130] > Addresses:
	I0507 19:55:41.055241    5068 command_runner.go:130] >   InternalIP:  172.19.143.144
	I0507 19:55:41.055241    5068 command_runner.go:130] >   Hostname:    multinode-600000-m02
	I0507 19:55:41.055241    5068 command_runner.go:130] > Capacity:
	I0507 19:55:41.055296    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:41.055316    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:41.055341    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:41.055341    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:41.055375    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:41.055396    5068 command_runner.go:130] > Allocatable:
	I0507 19:55:41.055396    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:41.055450    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:41.055471    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:41.055550    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:41.055550    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:41.055550    5068 command_runner.go:130] > System Info:
	I0507 19:55:41.055609    5068 command_runner.go:130] >   Machine ID:                 34eb4e78cde1423b93517d0087c85f3c
	I0507 19:55:41.055609    5068 command_runner.go:130] >   System UUID:                7ed694c3-4cb4-954c-b244-d0ff36461420
	I0507 19:55:41.055664    5068 command_runner.go:130] >   Boot ID:                    6dd39eeb-a923-4a09-93c8-8c26dd122d68
	I0507 19:55:41.055684    5068 command_runner.go:130] >   Kernel Version:             5.10.207
	I0507 19:55:41.055763    5068 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0507 19:55:41.055763    5068 command_runner.go:130] >   Operating System:           linux
	I0507 19:55:41.055763    5068 command_runner.go:130] >   Architecture:               amd64
	I0507 19:55:41.055837    5068 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0507 19:55:41.055862    5068 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0507 19:55:41.055862    5068 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0507 19:55:41.055862    5068 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0507 19:55:41.055862    5068 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0507 19:55:41.055862    5068 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0507 19:55:41.055862    5068 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0507 19:55:41.055862    5068 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0507 19:55:41.055862    5068 command_runner.go:130] >   default                     busybox-fc5497c4f-cpw2r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0507 19:55:41.055862    5068 command_runner.go:130] >   kube-system                 kindnet-jmlw2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	I0507 19:55:41.055862    5068 command_runner.go:130] >   kube-system                 kube-proxy-9fb6t           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0507 19:55:41.055862    5068 command_runner.go:130] > Allocated resources:
	I0507 19:55:41.055862    5068 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0507 19:55:41.055862    5068 command_runner.go:130] >   Resource           Requests   Limits
	I0507 19:55:41.055862    5068 command_runner.go:130] >   --------           --------   ------
	I0507 19:55:41.055862    5068 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0507 19:55:41.055862    5068 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0507 19:55:41.055862    5068 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0507 19:55:41.055862    5068 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0507 19:55:41.055862    5068 command_runner.go:130] > Events:
	I0507 19:55:41.055862    5068 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0507 19:55:41.055862    5068 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0507 19:55:41.055862    5068 command_runner.go:130] >   Normal  Starting                 18m                kube-proxy       
	I0507 19:55:41.055862    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  19m (x2 over 19m)  kubelet          Node multinode-600000-m02 status is now: NodeHasSufficientMemory
	I0507 19:55:41.055862    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet          Node multinode-600000-m02 status is now: NodeHasNoDiskPressure
	I0507 19:55:41.055862    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node multinode-600000-m02 status is now: NodeHasSufficientPID
	I0507 19:55:41.055862    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:41.056387    5068 command_runner.go:130] >   Normal  RegisteredNode           18m                node-controller  Node multinode-600000-m02 event: Registered Node multinode-600000-m02 in Controller
	I0507 19:55:41.056457    5068 command_runner.go:130] >   Normal  NodeReady                18m                kubelet          Node multinode-600000-m02 status is now: NodeReady
	I0507 19:55:41.056522    5068 command_runner.go:130] >   Normal  RegisteredNode           55s                node-controller  Node multinode-600000-m02 event: Registered Node multinode-600000-m02 in Controller
	I0507 19:55:41.056522    5068 command_runner.go:130] >   Normal  NodeNotReady             15s                node-controller  Node multinode-600000-m02 status is now: NodeNotReady
	I0507 19:55:41.056522    5068 command_runner.go:130] > Name:               multinode-600000-m03
	I0507 19:55:41.056586    5068 command_runner.go:130] > Roles:              <none>
	I0507 19:55:41.056586    5068 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0507 19:55:41.056648    5068 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0507 19:55:41.056648    5068 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0507 19:55:41.056648    5068 command_runner.go:130] >                     kubernetes.io/hostname=multinode-600000-m03
	I0507 19:55:41.056711    5068 command_runner.go:130] >                     kubernetes.io/os=linux
	I0507 19:55:41.056711    5068 command_runner.go:130] >                     minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	I0507 19:55:41.056711    5068 command_runner.go:130] >                     minikube.k8s.io/name=multinode-600000
	I0507 19:55:41.056711    5068 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0507 19:55:41.056711    5068 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_07T19_50_26_0700
	I0507 19:55:41.056711    5068 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0507 19:55:41.056711    5068 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0507 19:55:41.056711    5068 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0507 19:55:41.056884    5068 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0507 19:55:41.056884    5068 command_runner.go:130] > CreationTimestamp:  Tue, 07 May 2024 19:50:25 +0000
	I0507 19:55:41.056884    5068 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0507 19:55:41.056884    5068 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0507 19:55:41.056959    5068 command_runner.go:130] > Unschedulable:      false
	I0507 19:55:41.056959    5068 command_runner.go:130] > Lease:
	I0507 19:55:41.056959    5068 command_runner.go:130] >   HolderIdentity:  multinode-600000-m03
	I0507 19:55:41.056959    5068 command_runner.go:130] >   AcquireTime:     <unset>
	I0507 19:55:41.056959    5068 command_runner.go:130] >   RenewTime:       Tue, 07 May 2024 19:51:16 +0000
	I0507 19:55:41.057065    5068 command_runner.go:130] > Conditions:
	I0507 19:55:41.057094    5068 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0507 19:55:41.057094    5068 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0507 19:55:41.057094    5068 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 07 May 2024 19:50:31 +0000   Tue, 07 May 2024 19:51:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:41.057205    5068 command_runner.go:130] >   DiskPressure     Unknown   Tue, 07 May 2024 19:50:31 +0000   Tue, 07 May 2024 19:51:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:41.057239    5068 command_runner.go:130] >   PIDPressure      Unknown   Tue, 07 May 2024 19:50:31 +0000   Tue, 07 May 2024 19:51:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:41.057239    5068 command_runner.go:130] >   Ready            Unknown   Tue, 07 May 2024 19:50:31 +0000   Tue, 07 May 2024 19:51:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:41.057239    5068 command_runner.go:130] > Addresses:
	I0507 19:55:41.057239    5068 command_runner.go:130] >   InternalIP:  172.19.129.4
	I0507 19:55:41.057239    5068 command_runner.go:130] >   Hostname:    multinode-600000-m03
	I0507 19:55:41.057239    5068 command_runner.go:130] > Capacity:
	I0507 19:55:41.057239    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:41.057239    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:41.057239    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:41.057239    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:41.057239    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:41.057239    5068 command_runner.go:130] > Allocatable:
	I0507 19:55:41.057239    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:41.057239    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:41.057239    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:41.057239    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:41.057239    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:41.057239    5068 command_runner.go:130] > System Info:
	I0507 19:55:41.057239    5068 command_runner.go:130] >   Machine ID:                 380df77fae65410dba19d02344fea647
	I0507 19:55:41.057239    5068 command_runner.go:130] >   System UUID:                ed9d4a55-0088-004e-addb-543af9e02720
	I0507 19:55:41.057239    5068 command_runner.go:130] >   Boot ID:                    e0ec4add-64d0-47e3-9547-3261cfbddd3a
	I0507 19:55:41.057239    5068 command_runner.go:130] >   Kernel Version:             5.10.207
	I0507 19:55:41.057239    5068 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0507 19:55:41.057239    5068 command_runner.go:130] >   Operating System:           linux
	I0507 19:55:41.057239    5068 command_runner.go:130] >   Architecture:               amd64
	I0507 19:55:41.057239    5068 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0507 19:55:41.057239    5068 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0507 19:55:41.057239    5068 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0507 19:55:41.057239    5068 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0507 19:55:41.057239    5068 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0507 19:55:41.057239    5068 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0507 19:55:41.057239    5068 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0507 19:55:41.057239    5068 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0507 19:55:41.057239    5068 command_runner.go:130] >   kube-system                 kindnet-dkxzt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	I0507 19:55:41.057239    5068 command_runner.go:130] >   kube-system                 kube-proxy-pzn8q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	I0507 19:55:41.057239    5068 command_runner.go:130] > Allocated resources:
	I0507 19:55:41.057239    5068 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0507 19:55:41.057239    5068 command_runner.go:130] >   Resource           Requests   Limits
	I0507 19:55:41.057770    5068 command_runner.go:130] >   --------           --------   ------
	I0507 19:55:41.057770    5068 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0507 19:55:41.057770    5068 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0507 19:55:41.057770    5068 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0507 19:55:41.057770    5068 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0507 19:55:41.057853    5068 command_runner.go:130] > Events:
	I0507 19:55:41.057853    5068 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0507 19:55:41.057853    5068 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0507 19:55:41.057853    5068 command_runner.go:130] >   Normal  Starting                 5m12s                  kube-proxy       
	I0507 19:55:41.057906    5068 command_runner.go:130] >   Normal  Starting                 14m                    kube-proxy       
	I0507 19:55:41.057906    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:41.057972    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  14m (x2 over 14m)      kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientMemory
	I0507 19:55:41.057972    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    14m (x2 over 14m)      kubelet          Node multinode-600000-m03 status is now: NodeHasNoDiskPressure
	I0507 19:55:41.057972    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     14m (x2 over 14m)      kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientPID
	I0507 19:55:41.057972    5068 command_runner.go:130] >   Normal  NodeReady                14m                    kubelet          Node multinode-600000-m03 status is now: NodeReady
	I0507 19:55:41.058025    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m16s (x2 over 5m16s)  kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientMemory
	I0507 19:55:41.058025    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m16s (x2 over 5m16s)  kubelet          Node multinode-600000-m03 status is now: NodeHasNoDiskPressure
	I0507 19:55:41.058062    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m16s (x2 over 5m16s)  kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientPID
	I0507 19:55:41.058062    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m16s                  kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:41.058062    5068 command_runner.go:130] >   Normal  RegisteredNode           5m13s                  node-controller  Node multinode-600000-m03 event: Registered Node multinode-600000-m03 in Controller
	I0507 19:55:41.058062    5068 command_runner.go:130] >   Normal  NodeReady                5m10s                  kubelet          Node multinode-600000-m03 status is now: NodeReady
	I0507 19:55:41.058149    5068 command_runner.go:130] >   Normal  NodeNotReady             3m43s                  node-controller  Node multinode-600000-m03 status is now: NodeNotReady
	I0507 19:55:41.058149    5068 command_runner.go:130] >   Normal  RegisteredNode           55s                    node-controller  Node multinode-600000-m03 event: Registered Node multinode-600000-m03 in Controller
	I0507 19:55:41.065898    5068 logs.go:123] Gathering logs for coredns [d27627c19808] ...
	I0507 19:55:41.065898    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d27627c19808"
	I0507 19:55:41.091399    5068 command_runner.go:130] > .:53
	I0507 19:55:41.091497    5068 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = a3820eb745a9a768a035bf81145ae0754aeb40457ffd5109db8c64dac842ada6c2edf6f9e6a410714e0f5cbc9cd90cb925a2fb37599adf58a40dc1bc5fa339b9
	I0507 19:55:41.091497    5068 command_runner.go:130] > CoreDNS-1.11.1
	I0507 19:55:41.091497    5068 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0507 19:55:41.091605    5068 command_runner.go:130] > [INFO] 127.0.0.1:50649 - 62527 "HINFO IN 8322179340745765625.4555534598598098973. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.052335947s
	I0507 19:55:41.091670    5068 logs.go:123] Gathering logs for coredns [9550b237d8d7] ...
	I0507 19:55:41.091670    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9550b237d8d7"
	I0507 19:55:41.119967    5068 command_runner.go:130] > .:53
	I0507 19:55:41.119967    5068 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = a3820eb745a9a768a035bf81145ae0754aeb40457ffd5109db8c64dac842ada6c2edf6f9e6a410714e0f5cbc9cd90cb925a2fb37599adf58a40dc1bc5fa339b9
	I0507 19:55:41.119967    5068 command_runner.go:130] > CoreDNS-1.11.1
	I0507 19:55:41.119967    5068 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0507 19:55:41.120166    5068 command_runner.go:130] > [INFO] 127.0.0.1:52654 - 36159 "HINFO IN 3626502665556373881.284047733441029162. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.030998756s
	I0507 19:55:41.120191    5068 command_runner.go:130] > [INFO] 10.244.1.2:39771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00031622s
	I0507 19:55:41.120227    5068 command_runner.go:130] > [INFO] 10.244.1.2:55622 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.122912472s
	I0507 19:55:41.120227    5068 command_runner.go:130] > [INFO] 10.244.1.2:43817 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.066971198s
	I0507 19:55:41.120322    5068 command_runner.go:130] > [INFO] 10.244.1.2:39650 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.458807699s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.0.3:47684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164311s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.0.3:35317 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.00014611s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.0.3:42135 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000170411s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.0.3:41756 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000172612s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.1.2:40802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169011s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.1.2:55691 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.060031941s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.1.2:46687 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000212614s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.1.2:51698 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000276418s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.1.2:40943 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014055822s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.1.2:55853 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128309s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.1.2:34444 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000187212s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.1.2:54956 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091106s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.0.3:37511 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00031542s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.0.3:47331 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061304s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.0.3:36195 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211814s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.0.3:37240 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014531s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.0.3:56992 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00014411s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.0.3:53922 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127508s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.0.3:51034 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000225815s
	I0507 19:55:41.120380    5068 command_runner.go:130] > [INFO] 10.244.0.3:45123 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130808s
	I0507 19:55:41.120916    5068 command_runner.go:130] > [INFO] 10.244.1.2:53185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190512s
	I0507 19:55:41.120916    5068 command_runner.go:130] > [INFO] 10.244.1.2:47331 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056804s
	I0507 19:55:41.120916    5068 command_runner.go:130] > [INFO] 10.244.1.2:42551 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058104s
	I0507 19:55:41.120986    5068 command_runner.go:130] > [INFO] 10.244.1.2:47860 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057104s
	I0507 19:55:41.120986    5068 command_runner.go:130] > [INFO] 10.244.0.3:53037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190312s
	I0507 19:55:41.120986    5068 command_runner.go:130] > [INFO] 10.244.0.3:60613 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143109s
	I0507 19:55:41.120986    5068 command_runner.go:130] > [INFO] 10.244.0.3:33867 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069105s
	I0507 19:55:41.121072    5068 command_runner.go:130] > [INFO] 10.244.0.3:40289 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014191s
	I0507 19:55:41.121072    5068 command_runner.go:130] > [INFO] 10.244.1.2:55673 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204514s
	I0507 19:55:41.121072    5068 command_runner.go:130] > [INFO] 10.244.1.2:46474 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132609s
	I0507 19:55:41.121152    5068 command_runner.go:130] > [INFO] 10.244.1.2:48070 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000170211s
	I0507 19:55:41.121250    5068 command_runner.go:130] > [INFO] 10.244.1.2:56147 - 5 "PTR IN 1.128.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093806s
	I0507 19:55:41.121250    5068 command_runner.go:130] > [INFO] 10.244.0.3:39426 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107507s
	I0507 19:55:41.121250    5068 command_runner.go:130] > [INFO] 10.244.0.3:42569 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000295619s
	I0507 19:55:41.121324    5068 command_runner.go:130] > [INFO] 10.244.0.3:56970 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000267917s
	I0507 19:55:41.121393    5068 command_runner.go:130] > [INFO] 10.244.0.3:55625 - 5 "PTR IN 1.128.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00014751s
	I0507 19:55:41.121393    5068 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0507 19:55:41.121457    5068 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0507 19:55:41.126160    5068 logs.go:123] Gathering logs for kube-proxy [5255a972ff6c] ...
	I0507 19:55:41.126160    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5255a972ff6c"
	I0507 19:55:41.148009    5068 command_runner.go:130] ! I0507 19:54:35.575583       1 server_linux.go:69] "Using iptables proxy"
	I0507 19:55:41.148009    5068 command_runner.go:130] ! I0507 19:54:35.605564       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.135.22"]
	I0507 19:55:41.148072    5068 command_runner.go:130] ! I0507 19:54:35.819515       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0507 19:55:41.148072    5068 command_runner.go:130] ! I0507 19:54:35.819549       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0507 19:55:41.148072    5068 command_runner.go:130] ! I0507 19:54:35.819565       1 server_linux.go:165] "Using iptables Proxier"
	I0507 19:55:41.148072    5068 command_runner.go:130] ! I0507 19:54:35.837879       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0507 19:55:41.148137    5068 command_runner.go:130] ! I0507 19:54:35.838133       1 server.go:872] "Version info" version="v1.30.0"
	I0507 19:55:41.148137    5068 command_runner.go:130] ! I0507 19:54:35.838147       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:41.148137    5068 command_runner.go:130] ! I0507 19:54:35.845888       1 config.go:192] "Starting service config controller"
	I0507 19:55:41.148137    5068 command_runner.go:130] ! I0507 19:54:35.848183       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0507 19:55:41.148199    5068 command_runner.go:130] ! I0507 19:54:35.848226       1 config.go:319] "Starting node config controller"
	I0507 19:55:41.148199    5068 command_runner.go:130] ! I0507 19:54:35.848406       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0507 19:55:41.148257    5068 command_runner.go:130] ! I0507 19:54:35.849079       1 config.go:101] "Starting endpoint slice config controller"
	I0507 19:55:41.148257    5068 command_runner.go:130] ! I0507 19:54:35.849088       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0507 19:55:41.148343    5068 command_runner.go:130] ! I0507 19:54:35.954590       1 shared_informer.go:320] Caches are synced for node config
	I0507 19:55:41.148343    5068 command_runner.go:130] ! I0507 19:54:35.954640       1 shared_informer.go:320] Caches are synced for service config
	I0507 19:55:41.148368    5068 command_runner.go:130] ! I0507 19:54:35.954677       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0507 19:55:41.149885    5068 logs.go:123] Gathering logs for kindnet [2d49ad078ed3] ...
	I0507 19:55:41.149971    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d49ad078ed3"
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:07.116810       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:07.116911       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:07.117095       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:17.123472       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:17.123573       1 main.go:227] handling current node
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:17.123585       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:17.123594       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:17.124084       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:17.124175       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:27.134971       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:27.135112       1 main.go:227] handling current node
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:27.135127       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:27.135135       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:27.135337       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:27.135391       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:37.144428       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:37.144529       1 main.go:227] handling current node
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:37.144541       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:37.144549       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:37.144673       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:37.144698       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:47.154405       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:47.154529       1 main.go:227] handling current node
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:47.154543       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:47.154551       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.182905    5068 command_runner.go:130] ! I0507 19:41:47.155068       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.183434    5068 command_runner.go:130] ! I0507 19:41:47.155088       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.183472    5068 command_runner.go:130] ! I0507 19:41:57.163844       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:41:57.163910       1 main.go:227] handling current node
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:41:57.163920       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:41:57.163926       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:41:57.164061       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:41:57.164070       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:07.179518       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:07.179623       1 main.go:227] handling current node
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:07.179635       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:07.179643       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:07.179805       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:07.180030       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:17.193528       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:17.193636       1 main.go:227] handling current node
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:17.193649       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:17.193657       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:17.194171       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:17.194408       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:27.205877       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:27.205918       1 main.go:227] handling current node
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:27.205929       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:27.205936       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:27.206343       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:27.206360       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:37.213680       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:37.213766       1 main.go:227] handling current node
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:37.213780       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:37.213788       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:37.214204       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:37.214303       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:47.224946       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.183492    5068 command_runner.go:130] ! I0507 19:42:47.225125       1 main.go:227] handling current node
	I0507 19:55:41.184045    5068 command_runner.go:130] ! I0507 19:42:47.225139       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.184045    5068 command_runner.go:130] ! I0507 19:42:47.225148       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.184045    5068 command_runner.go:130] ! I0507 19:42:47.225499       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.184045    5068 command_runner.go:130] ! I0507 19:42:47.225556       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.184227    5068 command_runner.go:130] ! I0507 19:42:57.236504       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.184257    5068 command_runner.go:130] ! I0507 19:42:57.236681       1 main.go:227] handling current node
	I0507 19:55:41.184347    5068 command_runner.go:130] ! I0507 19:42:57.236699       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.184347    5068 command_runner.go:130] ! I0507 19:42:57.237025       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.184347    5068 command_runner.go:130] ! I0507 19:42:57.237359       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.184424    5068 command_runner.go:130] ! I0507 19:42:57.237385       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.184424    5068 command_runner.go:130] ! I0507 19:43:07.248420       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.184503    5068 command_runner.go:130] ! I0507 19:43:07.248600       1 main.go:227] handling current node
	I0507 19:55:41.184503    5068 command_runner.go:130] ! I0507 19:43:07.248614       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.184579    5068 command_runner.go:130] ! I0507 19:43:07.248622       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.184579    5068 command_runner.go:130] ! I0507 19:43:07.249108       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.184579    5068 command_runner.go:130] ! I0507 19:43:07.249189       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.184664    5068 command_runner.go:130] ! I0507 19:43:17.265021       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.184664    5068 command_runner.go:130] ! I0507 19:43:17.265056       1 main.go:227] handling current node
	I0507 19:55:41.184743    5068 command_runner.go:130] ! I0507 19:43:17.265067       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.184743    5068 command_runner.go:130] ! I0507 19:43:17.265074       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.184817    5068 command_runner.go:130] ! I0507 19:43:17.265713       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.184817    5068 command_runner.go:130] ! I0507 19:43:17.265780       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.184817    5068 command_runner.go:130] ! I0507 19:43:27.271270       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.184817    5068 command_runner.go:130] ! I0507 19:43:27.271308       1 main.go:227] handling current node
	I0507 19:55:41.184902    5068 command_runner.go:130] ! I0507 19:43:27.271320       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.184902    5068 command_runner.go:130] ! I0507 19:43:27.271326       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.184902    5068 command_runner.go:130] ! I0507 19:43:27.271684       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.184982    5068 command_runner.go:130] ! I0507 19:43:27.271715       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.184982    5068 command_runner.go:130] ! I0507 19:43:37.279223       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.184982    5068 command_runner.go:130] ! I0507 19:43:37.279323       1 main.go:227] handling current node
	I0507 19:55:41.184982    5068 command_runner.go:130] ! I0507 19:43:37.279336       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.184982    5068 command_runner.go:130] ! I0507 19:43:37.279344       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.185061    5068 command_runner.go:130] ! I0507 19:43:37.279894       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.185061    5068 command_runner.go:130] ! I0507 19:43:37.280039       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.185061    5068 command_runner.go:130] ! I0507 19:43:47.292160       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.185132    5068 command_runner.go:130] ! I0507 19:43:47.292257       1 main.go:227] handling current node
	I0507 19:55:41.185132    5068 command_runner.go:130] ! I0507 19:43:47.292269       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.185132    5068 command_runner.go:130] ! I0507 19:43:47.292276       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.185132    5068 command_runner.go:130] ! I0507 19:43:47.292451       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.185211    5068 command_runner.go:130] ! I0507 19:43:47.292531       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.185211    5068 command_runner.go:130] ! I0507 19:43:57.302957       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.185211    5068 command_runner.go:130] ! I0507 19:43:57.303129       1 main.go:227] handling current node
	I0507 19:55:41.185211    5068 command_runner.go:130] ! I0507 19:43:57.303144       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.185211    5068 command_runner.go:130] ! I0507 19:43:57.303152       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.185291    5068 command_runner.go:130] ! I0507 19:43:57.303598       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.185309    5068 command_runner.go:130] ! I0507 19:43:57.303754       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.185309    5068 command_runner.go:130] ! I0507 19:44:07.314533       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.185309    5068 command_runner.go:130] ! I0507 19:44:07.314565       1 main.go:227] handling current node
	I0507 19:55:41.185380    5068 command_runner.go:130] ! I0507 19:44:07.314575       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.185380    5068 command_runner.go:130] ! I0507 19:44:07.314581       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.185380    5068 command_runner.go:130] ! I0507 19:44:07.314878       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.185380    5068 command_runner.go:130] ! I0507 19:44:07.314965       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.185380    5068 command_runner.go:130] ! I0507 19:44:17.330535       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.186520    5068 command_runner.go:130] ! I0507 19:44:17.330644       1 main.go:227] handling current node
	I0507 19:55:41.186604    5068 command_runner.go:130] ! I0507 19:44:17.330657       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.186604    5068 command_runner.go:130] ! I0507 19:44:17.330665       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.186604    5068 command_runner.go:130] ! I0507 19:44:17.330781       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:17.330805       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:27.345226       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:27.345325       1 main.go:227] handling current node
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:27.345338       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:27.345346       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:27.345594       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:27.345661       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:37.358952       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:37.359029       1 main.go:227] handling current node
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:37.359041       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:37.359049       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:37.359583       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:37.359942       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:47.372236       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:47.372327       1 main.go:227] handling current node
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:47.372340       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:47.372347       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:47.372619       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:47.372773       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:57.381408       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:57.381561       1 main.go:227] handling current node
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:57.381575       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:57.381583       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:57.388779       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:44:57.388820       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.186696    5068 command_runner.go:130] ! I0507 19:45:07.401501       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:07.401539       1 main.go:227] handling current node
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:07.401551       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:07.401558       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:07.401946       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:07.401971       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:17.412152       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:17.412194       1 main.go:227] handling current node
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:17.412205       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:17.412546       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:17.412831       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:17.412948       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:27.420776       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:27.420889       1 main.go:227] handling current node
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:27.420901       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:27.420910       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:27.421607       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:27.421717       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:37.427913       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:37.428076       1 main.go:227] handling current node
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:37.428090       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:37.428099       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:37.428614       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:37.428647       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:47.434296       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:47.434399       1 main.go:227] handling current node
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:47.434412       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:47.434420       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:47.434745       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:47.434773       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:57.448460       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:57.448499       1 main.go:227] handling current node
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:57.448510       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:57.448517       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:57.448949       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:45:57.448981       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:07.463804       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:07.463844       1 main.go:227] handling current node
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:07.463855       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:07.463863       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:07.464346       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:07.464378       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:17.480817       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:17.480973       1 main.go:227] handling current node
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:17.481017       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:17.481027       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:17.481217       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:17.481364       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:27.490098       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:27.490193       1 main.go:227] handling current node
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:27.490207       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:27.490215       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:27.490319       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:27.490331       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:37.503127       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:37.503153       1 main.go:227] handling current node
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:37.503164       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:37.503171       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:37.503279       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:37.503286       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.187260    5068 command_runner.go:130] ! I0507 19:46:47.514408       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:46:47.514504       1 main.go:227] handling current node
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:46:47.514516       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:46:47.514524       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:46:47.514650       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:46:47.514661       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:46:57.529281       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:46:57.529381       1 main.go:227] handling current node
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:46:57.529394       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:46:57.529402       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:46:57.529689       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:46:57.529898       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:07.536805       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:07.536841       1 main.go:227] handling current node
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:07.536852       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:07.536859       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:07.537080       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:07.537103       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:17.551699       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:17.552050       1 main.go:227] handling current node
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:17.552126       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:17.552206       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:17.552600       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:17.552777       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:27.567122       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:27.567214       1 main.go:227] handling current node
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:27.567227       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:27.567251       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:27.567365       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:27.567376       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:37.579248       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:37.579334       1 main.go:227] handling current node
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:37.579346       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:37.579352       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:37.580168       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:37.580202       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:47.591084       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:47.591125       1 main.go:227] handling current node
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:47.591136       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:47.591143       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:47.591350       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:47.591365       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:57.599687       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:57.599780       1 main.go:227] handling current node
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:57.600282       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:57.600376       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:57.600829       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:47:57.601089       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:07.608877       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:07.608973       1 main.go:227] handling current node
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:07.609012       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:07.609021       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:07.609341       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:07.609437       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:17.616839       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:17.616948       1 main.go:227] handling current node
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:17.616962       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:17.616970       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:17.617201       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:17.617302       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:27.622610       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:27.622773       1 main.go:227] handling current node
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:27.622786       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:27.622794       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:27.622907       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.188278    5068 command_runner.go:130] ! I0507 19:48:27.622913       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.189332    5068 command_runner.go:130] ! I0507 19:48:37.635466       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.189332    5068 command_runner.go:130] ! I0507 19:48:37.635567       1 main.go:227] handling current node
	I0507 19:55:41.189403    5068 command_runner.go:130] ! I0507 19:48:37.635581       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.189403    5068 command_runner.go:130] ! I0507 19:48:37.635588       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.189522    5068 command_runner.go:130] ! I0507 19:48:37.635708       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.189522    5068 command_runner.go:130] ! I0507 19:48:37.635731       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.189522    5068 command_runner.go:130] ! I0507 19:48:47.648680       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.189591    5068 command_runner.go:130] ! I0507 19:48:47.648719       1 main.go:227] handling current node
	I0507 19:55:41.189591    5068 command_runner.go:130] ! I0507 19:48:47.648730       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.189591    5068 command_runner.go:130] ! I0507 19:48:47.648736       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.189664    5068 command_runner.go:130] ! I0507 19:48:47.649047       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.189664    5068 command_runner.go:130] ! I0507 19:48:47.649073       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.189723    5068 command_runner.go:130] ! I0507 19:48:57.661624       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.189723    5068 command_runner.go:130] ! I0507 19:48:57.661723       1 main.go:227] handling current node
	I0507 19:55:41.189723    5068 command_runner.go:130] ! I0507 19:48:57.661736       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.189799    5068 command_runner.go:130] ! I0507 19:48:57.661745       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.189799    5068 command_runner.go:130] ! I0507 19:48:57.661906       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.189799    5068 command_runner.go:130] ! I0507 19:48:57.661973       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.189799    5068 command_runner.go:130] ! I0507 19:49:07.670042       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.189799    5068 command_runner.go:130] ! I0507 19:49:07.670434       1 main.go:227] handling current node
	I0507 19:55:41.189799    5068 command_runner.go:130] ! I0507 19:49:07.670598       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.189914    5068 command_runner.go:130] ! I0507 19:49:07.670611       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.189914    5068 command_runner.go:130] ! I0507 19:49:07.670874       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.189967    5068 command_runner.go:130] ! I0507 19:49:07.670892       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.189967    5068 command_runner.go:130] ! I0507 19:49:17.688752       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.190019    5068 command_runner.go:130] ! I0507 19:49:17.688862       1 main.go:227] handling current node
	I0507 19:55:41.190019    5068 command_runner.go:130] ! I0507 19:49:17.689132       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.190068    5068 command_runner.go:130] ! I0507 19:49:17.689148       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.190068    5068 command_runner.go:130] ! I0507 19:49:17.689445       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.190068    5068 command_runner.go:130] ! I0507 19:49:17.689461       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.190068    5068 command_runner.go:130] ! I0507 19:49:27.703795       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.190139    5068 command_runner.go:130] ! I0507 19:49:27.703825       1 main.go:227] handling current node
	I0507 19:55:41.190139    5068 command_runner.go:130] ! I0507 19:49:27.703838       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.190139    5068 command_runner.go:130] ! I0507 19:49:27.703846       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.190207    5068 command_runner.go:130] ! I0507 19:49:27.704329       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.190207    5068 command_runner.go:130] ! I0507 19:49:27.704365       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.190207    5068 command_runner.go:130] ! I0507 19:49:37.711372       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.190281    5068 command_runner.go:130] ! I0507 19:49:37.711497       1 main.go:227] handling current node
	I0507 19:55:41.190281    5068 command_runner.go:130] ! I0507 19:49:37.711514       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.190281    5068 command_runner.go:130] ! I0507 19:49:37.711524       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.190281    5068 command_runner.go:130] ! I0507 19:49:37.711882       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.190281    5068 command_runner.go:130] ! I0507 19:49:37.711917       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.190367    5068 command_runner.go:130] ! I0507 19:49:47.727743       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.190367    5068 command_runner.go:130] ! I0507 19:49:47.727786       1 main.go:227] handling current node
	I0507 19:55:41.190401    5068 command_runner.go:130] ! I0507 19:49:47.727798       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.190401    5068 command_runner.go:130] ! I0507 19:49:47.727806       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.190401    5068 command_runner.go:130] ! I0507 19:49:47.728278       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.190401    5068 command_runner.go:130] ! I0507 19:49:47.728401       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.190401    5068 command_runner.go:130] ! I0507 19:49:57.734796       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.190401    5068 command_runner.go:130] ! I0507 19:49:57.734892       1 main.go:227] handling current node
	I0507 19:55:41.190467    5068 command_runner.go:130] ! I0507 19:49:57.734905       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.190467    5068 command_runner.go:130] ! I0507 19:49:57.734913       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.190499    5068 command_runner.go:130] ! I0507 19:49:57.735055       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.190499    5068 command_runner.go:130] ! I0507 19:49:57.735077       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.190499    5068 command_runner.go:130] ! I0507 19:50:07.747486       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.190499    5068 command_runner.go:130] ! I0507 19:50:07.747598       1 main.go:227] handling current node
	I0507 19:55:41.190499    5068 command_runner.go:130] ! I0507 19:50:07.747612       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.190565    5068 command_runner.go:130] ! I0507 19:50:07.747621       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.190595    5068 command_runner.go:130] ! I0507 19:50:07.748185       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.190595    5068 command_runner.go:130] ! I0507 19:50:07.748222       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.190595    5068 command_runner.go:130] ! I0507 19:50:17.755602       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.190595    5068 command_runner.go:130] ! I0507 19:50:17.755761       1 main.go:227] handling current node
	I0507 19:55:41.190595    5068 command_runner.go:130] ! I0507 19:50:17.755774       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.190595    5068 command_runner.go:130] ! I0507 19:50:17.755782       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.190673    5068 command_runner.go:130] ! I0507 19:50:17.756227       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:41.190701    5068 command_runner.go:130] ! I0507 19:50:17.756267       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:41.190701    5068 command_runner.go:130] ! I0507 19:50:27.770562       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.190701    5068 command_runner.go:130] ! I0507 19:50:27.770678       1 main.go:227] handling current node
	I0507 19:55:41.190746    5068 command_runner.go:130] ! I0507 19:50:27.770692       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.190746    5068 command_runner.go:130] ! I0507 19:50:27.770700       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.190746    5068 command_runner.go:130] ! I0507 19:50:27.775735       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:41.190746    5068 command_runner.go:130] ! I0507 19:50:27.775767       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:41.190796    5068 command_runner.go:130] ! I0507 19:50:27.775839       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.19.129.4 Flags: [] Table: 0} 
	I0507 19:55:41.190796    5068 command_runner.go:130] ! I0507 19:50:37.783936       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.190796    5068 command_runner.go:130] ! I0507 19:50:37.787174       1 main.go:227] handling current node
	I0507 19:55:41.190834    5068 command_runner.go:130] ! I0507 19:50:37.787394       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.190834    5068 command_runner.go:130] ! I0507 19:50:37.787449       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.190834    5068 command_runner.go:130] ! I0507 19:50:37.787687       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:41.190834    5068 command_runner.go:130] ! I0507 19:50:37.787791       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:41.190834    5068 command_runner.go:130] ! I0507 19:50:47.804388       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.190891    5068 command_runner.go:130] ! I0507 19:50:47.804423       1 main.go:227] handling current node
	I0507 19:55:41.190891    5068 command_runner.go:130] ! I0507 19:50:47.804434       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.190891    5068 command_runner.go:130] ! I0507 19:50:47.804441       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.190891    5068 command_runner.go:130] ! I0507 19:50:47.805320       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:41.190891    5068 command_runner.go:130] ! I0507 19:50:47.805405       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:41.190891    5068 command_runner.go:130] ! I0507 19:50:57.817550       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.190891    5068 command_runner.go:130] ! I0507 19:50:57.817645       1 main.go:227] handling current node
	I0507 19:55:41.190972    5068 command_runner.go:130] ! I0507 19:50:57.817660       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.190972    5068 command_runner.go:130] ! I0507 19:50:57.817668       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.191001    5068 command_runner.go:130] ! I0507 19:50:57.817802       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:41.191001    5068 command_runner.go:130] ! I0507 19:50:57.817829       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:41.191001    5068 command_runner.go:130] ! I0507 19:51:07.829324       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.191001    5068 command_runner.go:130] ! I0507 19:51:07.829427       1 main.go:227] handling current node
	I0507 19:55:41.191052    5068 command_runner.go:130] ! I0507 19:51:07.829440       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.191052    5068 command_runner.go:130] ! I0507 19:51:07.829449       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.191052    5068 command_runner.go:130] ! I0507 19:51:07.829931       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:41.191052    5068 command_runner.go:130] ! I0507 19:51:07.830095       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:41.191108    5068 command_runner.go:130] ! I0507 19:51:17.844953       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.191108    5068 command_runner.go:130] ! I0507 19:51:17.845032       1 main.go:227] handling current node
	I0507 19:55:41.191108    5068 command_runner.go:130] ! I0507 19:51:17.845046       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.191108    5068 command_runner.go:130] ! I0507 19:51:17.845128       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.191108    5068 command_runner.go:130] ! I0507 19:51:17.845337       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:41.191108    5068 command_runner.go:130] ! I0507 19:51:17.845367       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:41.191177    5068 command_runner.go:130] ! I0507 19:51:27.851575       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.191177    5068 command_runner.go:130] ! I0507 19:51:27.851686       1 main.go:227] handling current node
	I0507 19:55:41.191209    5068 command_runner.go:130] ! I0507 19:51:27.851698       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.191209    5068 command_runner.go:130] ! I0507 19:51:27.851706       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.191307    5068 command_runner.go:130] ! I0507 19:51:27.852455       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:41.191332    5068 command_runner.go:130] ! I0507 19:51:27.852540       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:41.191332    5068 command_runner.go:130] ! I0507 19:51:37.859761       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.191365    5068 command_runner.go:130] ! I0507 19:51:37.859857       1 main.go:227] handling current node
	I0507 19:55:41.191387    5068 command_runner.go:130] ! I0507 19:51:37.859871       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:51:37.859930       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:51:37.860319       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:51:37.860413       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:51:47.872402       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:51:47.872506       1 main.go:227] handling current node
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:51:47.872520       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:51:47.872528       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:51:47.872641       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:51:47.872692       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:51:57.885508       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:51:57.885541       1 main.go:227] handling current node
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:51:57.885551       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:51:57.885556       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:51:57.885664       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:51:57.885730       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:52:07.898773       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:52:07.899054       1 main.go:227] handling current node
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:52:07.899142       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:52:07.899258       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:52:07.899556       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:41.191407    5068 command_runner.go:130] ! I0507 19:52:07.899651       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:43.721434    5068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 19:55:43.745344    5068 command_runner.go:130] > 1882
	I0507 19:55:43.745344    5068 api_server.go:72] duration metric: took 1m5.6791172s to wait for apiserver process to appear ...
	I0507 19:55:43.745344    5068 api_server.go:88] waiting for apiserver healthz status ...
	I0507 19:55:43.752170    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 19:55:43.775189    5068 command_runner.go:130] > 7c95e3addc4b
	I0507 19:55:43.775279    5068 logs.go:276] 1 containers: [7c95e3addc4b]
	I0507 19:55:43.784442    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 19:55:43.804592    5068 command_runner.go:130] > ac320a872e77
	I0507 19:55:43.804592    5068 logs.go:276] 1 containers: [ac320a872e77]
	I0507 19:55:43.815252    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 19:55:43.833988    5068 command_runner.go:130] > d27627c19808
	I0507 19:55:43.833988    5068 command_runner.go:130] > 9550b237d8d7
	I0507 19:55:43.834049    5068 logs.go:276] 2 containers: [d27627c19808 9550b237d8d7]
	I0507 19:55:43.840539    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 19:55:43.859764    5068 command_runner.go:130] > 45341720d5be
	I0507 19:55:43.859825    5068 command_runner.go:130] > 7cefdac2050f
	I0507 19:55:43.859825    5068 logs.go:276] 2 containers: [45341720d5be 7cefdac2050f]
	I0507 19:55:43.868249    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 19:55:43.888848    5068 command_runner.go:130] > 5255a972ff6c
	I0507 19:55:43.889494    5068 command_runner.go:130] > aa9692c1fbd3
	I0507 19:55:43.889577    5068 logs.go:276] 2 containers: [5255a972ff6c aa9692c1fbd3]
	I0507 19:55:43.895450    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 19:55:43.914504    5068 command_runner.go:130] > 922d1e2b8745
	I0507 19:55:43.914870    5068 command_runner.go:130] > 3067f16e2e38
	I0507 19:55:43.915223    5068 logs.go:276] 2 containers: [922d1e2b8745 3067f16e2e38]
	I0507 19:55:43.921423    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 19:55:43.941571    5068 command_runner.go:130] > 29b5cae0b8f1
	I0507 19:55:43.942521    5068 command_runner.go:130] > 2d49ad078ed3
	I0507 19:55:43.942960    5068 logs.go:276] 2 containers: [29b5cae0b8f1 2d49ad078ed3]
	I0507 19:55:43.943068    5068 logs.go:123] Gathering logs for kubelet ...
	I0507 19:55:43.943068    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 19:55:43.975923    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 kubelet[1385]: I0507 19:54:25.312690    1385 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 kubelet[1385]: I0507 19:54:25.313053    1385 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 kubelet[1385]: I0507 19:54:25.314038    1385 server.go:927] "Client rotation is on, will bootstrap in background"
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 kubelet[1385]: E0507 19:54:25.314980    1385 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 kubelet[1417]: I0507 19:54:26.032056    1417 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 kubelet[1417]: I0507 19:54:26.032321    1417 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 kubelet[1417]: I0507 19:54:26.032668    1417 server.go:927] "Client rotation is on, will bootstrap in background"
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 kubelet[1417]: E0507 19:54:26.032817    1417 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0507 19:55:43.975988    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0507 19:55:43.976573    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I0507 19:55:43.976618    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: kubelet.service: Deactivated successfully.
	I0507 19:55:43.976618    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0507 19:55:43.976661    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0507 19:55:43.976741    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.682448    1526 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0507 19:55:43.976792    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.683051    1526 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:43.976835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.683318    1526 server.go:927] "Client rotation is on, will bootstrap in background"
	I0507 19:55:43.976879    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.685208    1526 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0507 19:55:43.976920    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.694353    1526 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0507 19:55:43.977019    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.719318    1526 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0507 19:55:43.977063    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.719480    1526 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0507 19:55:43.977147    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.720216    1526 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0507 19:55:43.977282    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.720309    1526 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-600000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0507 19:55:43.977365    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.720926    1526 topology_manager.go:138] "Creating topology manager with none policy"
	I0507 19:55:43.977410    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.721001    1526 container_manager_linux.go:301] "Creating device plugin manager"
	I0507 19:55:43.977454    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.721416    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0507 19:55:43.977500    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.723173    1526 kubelet.go:400] "Attempting to sync node with API server"
	I0507 19:55:43.977588    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.723253    1526 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0507 19:55:43.977632    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.723313    1526 kubelet.go:312] "Adding apiserver pod source"
	I0507 19:55:43.977677    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.723974    1526 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0507 19:55:43.977766    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: W0507 19:54:28.726787    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-600000&limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:43.977855    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.726939    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-600000&limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:43.977912    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.731381    1526 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0507 19:55:43.977996    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.733269    1526 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0507 19:55:43.978034    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: W0507 19:54:28.734851    1526 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0507 19:55:43.978118    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.736816    1526 server.go:1264] "Started kubelet"
	I0507 19:55:43.978202    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: W0507 19:54:28.737228    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:43.978373    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.737335    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:43.978418    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.738410    1526 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0507 19:55:43.978457    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.740846    1526 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0507 19:55:43.978493    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.742005    1526 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0507 19:55:43.978632    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.742309    1526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.19.135.22:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-600000.17cd4cf9c52f26de  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-600000,UID:multinode-600000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-600000,},FirstTimestamp:2024-05-07 19:54:28.736796382 +0000 UTC m=+0.138302022,LastTimestamp:2024-05-07 19:54:28.736796382 +0000 UTC m=+0.138302022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-600000,}"
	I0507 19:55:43.978714    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.743118    1526 server.go:455] "Adding debug handlers to kubelet server"
	I0507 19:55:43.978796    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.749839    1526 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0507 19:55:43.978840    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.768561    1526 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0507 19:55:43.978929    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: W0507 19:54:28.769072    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:43.979012    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.769183    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:43.979095    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.769400    1526 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0507 19:55:43.979183    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.769456    1526 factory.go:221] Registration of the systemd container factory successfully
	I0507 19:55:43.979222    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.770894    1526 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0507 19:55:43.979304    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.772962    1526 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0507 19:55:43.979393    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.785539    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-600000?timeout=10s\": dial tcp 172.19.135.22:8443: connect: connection refused" interval="200ms"
	I0507 19:55:43.979437    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.791725    1526 reconciler.go:26] "Reconciler: start to sync state"
	I0507 19:55:43.979482    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.830988    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0507 19:55:43.979526    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.840813    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0507 19:55:43.979607    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.840916    1526 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0507 19:55:43.979669    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.841140    1526 kubelet.go:2337] "Starting kubelet main sync loop"
	I0507 19:55:43.979669    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.841245    1526 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0507 19:55:43.979708    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: W0507 19:54:28.856981    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:43.979746    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.857107    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:43.979784    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.863787    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0507 19:55:43.979822    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0507 19:55:43.979822    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0507 19:55:43.979860    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0507 19:55:43.979860    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0507 19:55:43.979895    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.867313    1526 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0507 19:55:43.979895    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.867334    1526 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0507 19:55:43.979933    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.867353    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0507 19:55:43.979933    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.867956    1526 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0507 19:55:43.979972    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.867975    1526 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0507 19:55:43.979972    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.868003    1526 policy_none.go:49] "None policy: Start"
	I0507 19:55:43.980011    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.868488    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-600000"
	I0507 19:55:43.980011    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.869266    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.135.22:8443: connect: connection refused" node="multinode-600000"
	I0507 19:55:43.980050    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.874219    1526 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0507 19:55:43.980050    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.874241    1526 state_mem.go:35] "Initializing new in-memory state store"
	I0507 19:55:43.980088    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.875298    1526 state_mem.go:75] "Updated machine memory state"
	I0507 19:55:43.980088    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.878167    1526 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0507 19:55:43.980206    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.878458    1526 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0507 19:55:43.980206    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.880352    1526 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0507 19:55:43.980245    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.881798    1526 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-600000\" not found"
	I0507 19:55:43.980283    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.941803    1526 topology_manager.go:215] "Topology Admit Handler" podUID="cd9cba8f94818776ec6d8836322192b3" podNamespace="kube-system" podName="kube-apiserver-multinode-600000"
	I0507 19:55:43.980322    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.944197    1526 topology_manager.go:215] "Topology Admit Handler" podUID="f5d6aa60dc93b5e562f37ed2236c3022" podNamespace="kube-system" podName="kube-controller-manager-multinode-600000"
	I0507 19:55:43.980357    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.945407    1526 topology_manager.go:215] "Topology Admit Handler" podUID="7c4ee79f6d4f6adb00b636f817445fef" podNamespace="kube-system" podName="kube-scheduler-multinode-600000"
	I0507 19:55:43.980357    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.946291    1526 topology_manager.go:215] "Topology Admit Handler" podUID="1581bf6b00d338797c8fb8b10b74abde" podNamespace="kube-system" podName="etcd-multinode-600000"
	I0507 19:55:43.980395    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.947956    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86921e7643746441a6e93f7fb6fecdf7c7bf46b090192f2fc398129fad83dd9d"
	I0507 19:55:43.980433    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.947978    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70cff02905e8f07315ff7e01ce388c0da3246f3c03bb7c785b3b7979a31852a9"
	I0507 19:55:43.980471    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.948141    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58ebd877d77fb0eee19924ed195f0ccced541015095c32b9d58ab78831543622"
	I0507 19:55:43.980471    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.948156    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f27faec2ed6996286f7030cea68f26137cea7abaedede628d29933fbde0ae9"
	I0507 19:55:43.980505    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.959165    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99af61c6e282aa13c7209e469e5e354f24968796fc455a65fdf2e8611f760994"
	I0507 19:55:43.980544    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.970524    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57950c0fdcbe4c7e6d3490c6477c947eac153e908d8e81090ef8205a050bb14c"
	I0507 19:55:43.980583    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.987462    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-600000?timeout=10s\": dial tcp 172.19.135.22:8443: connect: connection refused" interval="400ms"
	I0507 19:55:43.980583    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.989236    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca0d420373470a8f3b23bd3c9b5c59f5e7c4896da57782b69f9498d3ff333fb5"
	I0507 19:55:43.980621    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.000822    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4afb10dc8b11575b4eaa25a6b283141c6e029c9b44d3db3a69e4c934171b778e"
	I0507 19:55:43.980691    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010098    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd9cba8f94818776ec6d8836322192b3-k8s-certs\") pod \"kube-apiserver-multinode-600000\" (UID: \"cd9cba8f94818776ec6d8836322192b3\") " pod="kube-system/kube-apiserver-multinode-600000"
	I0507 19:55:43.980723    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010146    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5d6aa60dc93b5e562f37ed2236c3022-flexvolume-dir\") pod \"kube-controller-manager-multinode-600000\" (UID: \"f5d6aa60dc93b5e562f37ed2236c3022\") " pod="kube-system/kube-controller-manager-multinode-600000"
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010167    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5d6aa60dc93b5e562f37ed2236c3022-kubeconfig\") pod \"kube-controller-manager-multinode-600000\" (UID: \"f5d6aa60dc93b5e562f37ed2236c3022\") " pod="kube-system/kube-controller-manager-multinode-600000"
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010187    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c4ee79f6d4f6adb00b636f817445fef-kubeconfig\") pod \"kube-scheduler-multinode-600000\" (UID: \"7c4ee79f6d4f6adb00b636f817445fef\") " pod="kube-system/kube-scheduler-multinode-600000"
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010223    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/1581bf6b00d338797c8fb8b10b74abde-etcd-certs\") pod \"etcd-multinode-600000\" (UID: \"1581bf6b00d338797c8fb8b10b74abde\") " pod="kube-system/etcd-multinode-600000"
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010245    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd9cba8f94818776ec6d8836322192b3-ca-certs\") pod \"kube-apiserver-multinode-600000\" (UID: \"cd9cba8f94818776ec6d8836322192b3\") " pod="kube-system/kube-apiserver-multinode-600000"
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010264    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5d6aa60dc93b5e562f37ed2236c3022-ca-certs\") pod \"kube-controller-manager-multinode-600000\" (UID: \"f5d6aa60dc93b5e562f37ed2236c3022\") " pod="kube-system/kube-controller-manager-multinode-600000"
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010292    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5d6aa60dc93b5e562f37ed2236c3022-k8s-certs\") pod \"kube-controller-manager-multinode-600000\" (UID: \"f5d6aa60dc93b5e562f37ed2236c3022\") " pod="kube-system/kube-controller-manager-multinode-600000"
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010323    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5d6aa60dc93b5e562f37ed2236c3022-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-600000\" (UID: \"f5d6aa60dc93b5e562f37ed2236c3022\") " pod="kube-system/kube-controller-manager-multinode-600000"
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010365    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/1581bf6b00d338797c8fb8b10b74abde-etcd-data\") pod \"etcd-multinode-600000\" (UID: \"1581bf6b00d338797c8fb8b10b74abde\") " pod="kube-system/etcd-multinode-600000"
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010413    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd9cba8f94818776ec6d8836322192b3-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-600000\" (UID: \"cd9cba8f94818776ec6d8836322192b3\") " pod="kube-system/kube-apiserver-multinode-600000"
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.013343    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af16a92d7c1cc8f0246bdad95c9e580f729470ea118e03dce721c77127d06f56"
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.071582    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-600000"
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.072513    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.135.22:8443: connect: connection refused" node="multinode-600000"
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.389792    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-600000?timeout=10s\": dial tcp 172.19.135.22:8443: connect: connection refused" interval="800ms"
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.474674    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-600000"
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.475643    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.135.22:8443: connect: connection refused" node="multinode-600000"
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: W0507 19:54:29.564966    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.565028    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:43.980780    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: W0507 19:54:29.712836    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:43.981304    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.712892    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:43.981341    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: W0507 19:54:29.898338    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:43.981380    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.898478    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 kubelet[1526]: W0507 19:54:30.187733    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-600000&limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 kubelet[1526]: E0507 19:54:30.187857    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-600000&limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 kubelet[1526]: E0507 19:54:30.195864    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-600000?timeout=10s\": dial tcp 172.19.135.22:8443: connect: connection refused" interval="1.6s"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 kubelet[1526]: I0507 19:54:30.277090    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-600000"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 kubelet[1526]: E0507 19:54:30.278121    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.135.22:8443: connect: connection refused" node="multinode-600000"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:31 multinode-600000 kubelet[1526]: I0507 19:54:31.880610    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-600000"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.731174    1526 apiserver.go:52] "Watching apiserver"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.747542    1526 topology_manager.go:215] "Topology Admit Handler" podUID="d067d438-f4af-42e8-930d-3423a3ac211f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5j966"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.747825    1526 topology_manager.go:215] "Topology Admit Handler" podUID="9a39807c-6243-4aa2-86f4-8626031c80a6" podNamespace="kube-system" podName="kube-proxy-c9gw5"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.748122    1526 topology_manager.go:215] "Topology Admit Handler" podUID="b5145a4d-38aa-426e-947f-3480e269470e" podNamespace="kube-system" podName="kindnet-zw4r9"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.748365    1526 topology_manager.go:215] "Topology Admit Handler" podUID="90142b77-53fb-42e1-94f8-7f8a3c7765ac" podNamespace="kube-system" podName="storage-provisioner"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.748551    1526 topology_manager.go:215] "Topology Admit Handler" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a" podNamespace="default" podName="busybox-fc5497c4f-gcqlv"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.749095    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.750550    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-600000" podUID="d55601ee-11f4-432c-8170-ecc4d8212782"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.750908    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.770134    1526 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.810065    1526 kubelet_node_status.go:112] "Node was previously registered" node="multinode-600000"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.810163    1526 kubelet_node_status.go:76] "Successfully registered node" node="multinode-600000"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.818444    1526 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.819648    1526 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.820321    1526 setters.go:580] "Node became not ready" node="multinode-600000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-07T19:54:33Z","lastTransitionTime":"2024-05-07T19:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0507 19:55:43.981410    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.837252    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-600000"
	I0507 19:55:43.981932    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.845847    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a39807c-6243-4aa2-86f4-8626031c80a6-lib-modules\") pod \"kube-proxy-c9gw5\" (UID: \"9a39807c-6243-4aa2-86f4-8626031c80a6\") " pod="kube-system/kube-proxy-c9gw5"
	I0507 19:55:43.981968    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.845991    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5145a4d-38aa-426e-947f-3480e269470e-xtables-lock\") pod \"kindnet-zw4r9\" (UID: \"b5145a4d-38aa-426e-947f-3480e269470e\") " pod="kube-system/kindnet-zw4r9"
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.846149    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5145a4d-38aa-426e-947f-3480e269470e-lib-modules\") pod \"kindnet-zw4r9\" (UID: \"b5145a4d-38aa-426e-947f-3480e269470e\") " pod="kube-system/kindnet-zw4r9"
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.846211    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/90142b77-53fb-42e1-94f8-7f8a3c7765ac-tmp\") pod \"storage-provisioner\" (UID: \"90142b77-53fb-42e1-94f8-7f8a3c7765ac\") " pod="kube-system/storage-provisioner"
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.846289    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b5145a4d-38aa-426e-947f-3480e269470e-cni-cfg\") pod \"kindnet-zw4r9\" (UID: \"b5145a4d-38aa-426e-947f-3480e269470e\") " pod="kube-system/kindnet-zw4r9"
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.846373    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a39807c-6243-4aa2-86f4-8626031c80a6-xtables-lock\") pod \"kube-proxy-c9gw5\" (UID: \"9a39807c-6243-4aa2-86f4-8626031c80a6\") " pod="kube-system/kube-proxy-c9gw5"
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.846904    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.847130    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:54:34.347095993 +0000 UTC m=+5.748601633 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.887296    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.887405    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.887613    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:54:34.387566082 +0000 UTC m=+5.789071722 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.981303    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-600000" podStartSLOduration=0.981289683 podStartE2EDuration="981.289683ms" podCreationTimestamp="2024-05-07 19:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-07 19:54:33.964275321 +0000 UTC m=+5.365780961" watchObservedRunningTime="2024-05-07 19:54:33.981289683 +0000 UTC m=+5.382795323"
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: E0507 19:54:34.351653    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: E0507 19:54:34.352036    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:54:35.352015549 +0000 UTC m=+6.753521289 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: E0507 19:54:34.452926    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: E0507 19:54:34.452966    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: E0507 19:54:34.453012    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:54:35.45299776 +0000 UTC m=+6.854503500 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: I0507 19:54:34.661528    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deb171c003562d2f3e3c8e1ec2fbec5ecaa700e48e277dd0cc50addf6cbb21a3"
	I0507 19:55:43.982001    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: I0507 19:54:34.862381    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a96b44957f27b92ef21190115bc428" path="/var/lib/kubelet/pods/b4a96b44957f27b92ef21190115bc428/volumes"
	I0507 19:55:43.982524    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: I0507 19:54:34.863294    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d902475f151631231b80fe38edab39e8" path="/var/lib/kubelet/pods/d902475f151631231b80fe38edab39e8/volumes"
	I0507 19:55:43.982524    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: I0507 19:54:34.938029    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="857f6b563091091373f72d143ed2af0ab7469cb77eb82675a7f665d172f1793a"
	I0507 19:55:43.982563    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: I0507 19:54:35.108646    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09d2fda974adf9dbabc54b3412155043fbda490a951a6b325ac66ef3e385e99d"
	I0507 19:55:43.982589    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: I0507 19:54:35.109054    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-600000" podUID="c2ba4e1a-3041-4395-a246-9dd28358b95a"
	I0507 19:55:43.982589    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: I0507 19:54:35.145688    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-600000"
	I0507 19:55:43.982589    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.358372    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.358454    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:54:37.358438267 +0000 UTC m=+8.759943907 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.459230    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.459270    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.459321    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:54:37.459300671 +0000 UTC m=+8.860806411 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.842389    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.843885    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: I0507 19:54:35.878265    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-600000" podStartSLOduration=0.878244864 podStartE2EDuration="878.244864ms" podCreationTimestamp="2024-05-07 19:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-07 19:54:35.194323185 +0000 UTC m=+6.595828825" watchObservedRunningTime="2024-05-07 19:54:35.878244864 +0000 UTC m=+7.279750504"
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.373090    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.373161    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:54:41.373147008 +0000 UTC m=+12.774652748 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.475199    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.475408    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.475544    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:54:41.475519298 +0000 UTC m=+12.877025038 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.842214    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.842786    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:39 multinode-600000 kubelet[1526]: E0507 19:54:39.842086    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:39 multinode-600000 kubelet[1526]: E0507 19:54:39.842432    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.418265    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.418590    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:54:49.418553195 +0000 UTC m=+20.820058935 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.518834    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.519001    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.519057    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:54:49.519041878 +0000 UTC m=+20.920547618 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.842245    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.842350    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:43 multinode-600000 kubelet[1526]: E0507 19:54:43.842034    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:43 multinode-600000 kubelet[1526]: E0507 19:54:43.842216    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:45 multinode-600000 kubelet[1526]: E0507 19:54:45.842657    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:45 multinode-600000 kubelet[1526]: E0507 19:54:45.842807    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:47 multinode-600000 kubelet[1526]: E0507 19:54:47.842575    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:47 multinode-600000 kubelet[1526]: E0507 19:54:47.843152    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.982691    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.491796    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:43.983736    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.491989    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:55:05.491971903 +0000 UTC m=+36.893477643 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:43.983736    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.592490    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.983736    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.592595    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.983736    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.592653    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:55:05.592637338 +0000 UTC m=+36.994142978 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.983736    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.842152    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.983736    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.842295    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.983736    5068 command_runner.go:130] > May 07 19:54:51 multinode-600000 kubelet[1526]: E0507 19:54:51.841678    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.983736    5068 command_runner.go:130] > May 07 19:54:51 multinode-600000 kubelet[1526]: E0507 19:54:51.841994    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.983736    5068 command_runner.go:130] > May 07 19:54:53 multinode-600000 kubelet[1526]: E0507 19:54:53.841974    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.983736    5068 command_runner.go:130] > May 07 19:54:53 multinode-600000 kubelet[1526]: E0507 19:54:53.842654    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.983736    5068 command_runner.go:130] > May 07 19:54:55 multinode-600000 kubelet[1526]: E0507 19:54:55.842626    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.983736    5068 command_runner.go:130] > May 07 19:54:55 multinode-600000 kubelet[1526]: E0507 19:54:55.842841    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.983736    5068 command_runner.go:130] > May 07 19:54:57 multinode-600000 kubelet[1526]: E0507 19:54:57.841446    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.983736    5068 command_runner.go:130] > May 07 19:54:57 multinode-600000 kubelet[1526]: E0507 19:54:57.842105    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.983736    5068 command_runner.go:130] > May 07 19:54:59 multinode-600000 kubelet[1526]: E0507 19:54:59.842713    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.983736    5068 command_runner.go:130] > May 07 19:54:59 multinode-600000 kubelet[1526]: E0507 19:54:59.842855    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.984294    5068 command_runner.go:130] > May 07 19:55:01 multinode-600000 kubelet[1526]: E0507 19:55:01.842363    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.984344    5068 command_runner.go:130] > May 07 19:55:01 multinode-600000 kubelet[1526]: E0507 19:55:01.842882    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.984377    5068 command_runner.go:130] > May 07 19:55:03 multinode-600000 kubelet[1526]: E0507 19:55:03.841937    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.984420    5068 command_runner.go:130] > May 07 19:55:03 multinode-600000 kubelet[1526]: E0507 19:55:03.841997    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.984453    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: I0507 19:55:05.501553    1526 scope.go:117] "RemoveContainer" containerID="232351adf489ab41e3b95183df116efc3adc75538ec9a57cef3b4ce608097033"
	I0507 19:55:43.984453    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: I0507 19:55:05.501881    1526 scope.go:117] "RemoveContainer" containerID="d1e3e4629bc4ab52c27aca01f9ac01a28969e78a370077ee687920a51d952e19"
	I0507 19:55:43.984493    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.502298    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(90142b77-53fb-42e1-94f8-7f8a3c7765ac)\"" pod="kube-system/storage-provisioner" podUID="90142b77-53fb-42e1-94f8-7f8a3c7765ac"
	I0507 19:55:43.984526    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.529223    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.529356    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:55:37.529338774 +0000 UTC m=+68.930844414 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.629243    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.629467    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.629628    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:55:37.629609811 +0000 UTC m=+69.031115551 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.842421    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.842632    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:07 multinode-600000 kubelet[1526]: E0507 19:55:07.843040    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:07 multinode-600000 kubelet[1526]: E0507 19:55:07.843857    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:09 multinode-600000 kubelet[1526]: I0507 19:55:09.363617    1526 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:16 multinode-600000 kubelet[1526]: I0507 19:55:16.842451    1526 scope.go:117] "RemoveContainer" containerID="d1e3e4629bc4ab52c27aca01f9ac01a28969e78a370077ee687920a51d952e19"
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]: I0507 19:55:28.871479    1526 scope.go:117] "RemoveContainer" containerID="1ad9d594832564eb3ecbb3ab96ce2eec4cb095edf31a39c051d592ae068a9a6f"
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]: E0507 19:55:28.875911    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0507 19:55:43.984600    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]: I0507 19:55:28.916075    1526 scope.go:117] "RemoveContainer" containerID="675dcdcafeef04c4b82949c75f102ba97dda812ac3352b02e00d56d085f5d3bc"
	I0507 19:55:44.025154    5068 logs.go:123] Gathering logs for dmesg ...
	I0507 19:55:44.025154    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 19:55:44.050535    5068 command_runner.go:130] > [May 7 19:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0507 19:55:44.050535    5068 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0507 19:55:44.050535    5068 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0507 19:55:44.050535    5068 command_runner.go:130] > [  +0.116232] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0507 19:55:44.050657    5068 command_runner.go:130] > [  +0.022195] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0507 19:55:44.050657    5068 command_runner.go:130] > [  +0.000003] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0507 19:55:44.050657    5068 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0507 19:55:44.050657    5068 command_runner.go:130] > [  +0.059863] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0507 19:55:44.050657    5068 command_runner.go:130] > [  +0.024233] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0507 19:55:44.050657    5068 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0507 19:55:44.050657    5068 command_runner.go:130] > [May 7 19:53] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0507 19:55:44.050657    5068 command_runner.go:130] > [  +1.293154] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0507 19:55:44.050657    5068 command_runner.go:130] > [  +1.138766] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0507 19:55:44.050801    5068 command_runner.go:130] > [  +7.459478] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0507 19:55:44.050801    5068 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0507 19:55:44.050801    5068 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0507 19:55:44.050801    5068 command_runner.go:130] > [ +43.605395] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	I0507 19:55:44.050801    5068 command_runner.go:130] > [  +0.173535] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	I0507 19:55:44.050801    5068 command_runner.go:130] > [May 7 19:54] systemd-fstab-generator[975]: Ignoring "noauto" option for root device
	I0507 19:55:44.050801    5068 command_runner.go:130] > [  +0.087049] kauditd_printk_skb: 73 callbacks suppressed
	I0507 19:55:44.050912    5068 command_runner.go:130] > [  +0.469142] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	I0507 19:55:44.050912    5068 command_runner.go:130] > [  +0.182768] systemd-fstab-generator[1025]: Ignoring "noauto" option for root device
	I0507 19:55:44.050912    5068 command_runner.go:130] > [  +0.198440] systemd-fstab-generator[1039]: Ignoring "noauto" option for root device
	I0507 19:55:44.050912    5068 command_runner.go:130] > [  +2.865339] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	I0507 19:55:44.050912    5068 command_runner.go:130] > [  +0.189423] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	I0507 19:55:44.050912    5068 command_runner.go:130] > [  +0.164316] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	I0507 19:55:44.050912    5068 command_runner.go:130] > [  +0.220106] systemd-fstab-generator[1266]: Ignoring "noauto" option for root device
	I0507 19:55:44.050912    5068 command_runner.go:130] > [  +0.801286] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	I0507 19:55:44.051019    5068 command_runner.go:130] > [  +0.081896] kauditd_printk_skb: 205 callbacks suppressed
	I0507 19:55:44.051019    5068 command_runner.go:130] > [  +3.512673] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
	I0507 19:55:44.051019    5068 command_runner.go:130] > [  +1.511112] kauditd_printk_skb: 64 callbacks suppressed
	I0507 19:55:44.051019    5068 command_runner.go:130] > [  +5.012853] kauditd_printk_skb: 25 callbacks suppressed
	I0507 19:55:44.051019    5068 command_runner.go:130] > [  +3.386216] systemd-fstab-generator[2338]: Ignoring "noauto" option for root device
	I0507 19:55:44.051019    5068 command_runner.go:130] > [  +7.924740] kauditd_printk_skb: 55 callbacks suppressed
	I0507 19:55:44.052897    5068 logs.go:123] Gathering logs for etcd [ac320a872e77] ...
	I0507 19:55:44.052897    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac320a872e77"
	I0507 19:55:44.086283    5068 command_runner.go:130] ! {"level":"warn","ts":"2024-05-07T19:54:30.550295Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0507 19:55:44.086894    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.55691Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.19.135.22:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.19.135.22:2380","--initial-cluster=multinode-600000=https://172.19.135.22:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.19.135.22:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.19.135.22:2380","--name=multinode-600000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0507 19:55:44.086894    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.557392Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0507 19:55:44.087080    5068 command_runner.go:130] ! {"level":"warn","ts":"2024-05-07T19:54:30.557435Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0507 19:55:44.087080    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.557445Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.19.135.22:2380"]}
	I0507 19:55:44.087191    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.557477Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0507 19:55:44.087191    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.567644Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.19.135.22:2379"]}
	I0507 19:55:44.087355    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.569078Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-600000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.19.135.22:2380"],"listen-peer-urls":["https://172.19.135.22:2380"],"advertise-client-urls":["https://172.19.135.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.135.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0507 19:55:44.087355    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.589786Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"17.628697ms"}
	I0507 19:55:44.087355    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.62481Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0507 19:55:44.087462    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.649734Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"9263975694bef132","local-member-id":"aac5eb588ad33a11","commit-index":1911}
	I0507 19:55:44.087462    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.650002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 switched to configuration voters=()"}
	I0507 19:55:44.087462    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.650099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became follower at term 2"}
	I0507 19:55:44.087570    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.650259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aac5eb588ad33a11 [peers: [], term: 2, commit: 1911, applied: 0, lastindex: 1911, lastterm: 2]"}
	I0507 19:55:44.087570    5068 command_runner.go:130] ! {"level":"warn","ts":"2024-05-07T19:54:30.665767Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0507 19:55:44.087653    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.674281Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1115}
	I0507 19:55:44.087696    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.683184Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1668}
	I0507 19:55:44.087727    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.694481Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0507 19:55:44.087773    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.704352Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"aac5eb588ad33a11","timeout":"7s"}
	I0507 19:55:44.087818    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.708328Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"aac5eb588ad33a11"}
	I0507 19:55:44.087818    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.708388Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"aac5eb588ad33a11","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0507 19:55:44.087818    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.710881Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0507 19:55:44.087818    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.711472Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0507 19:55:44.087915    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.71284Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0507 19:55:44.087915    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.712991Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0507 19:55:44.088015    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.713531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 switched to configuration voters=(12305500322378496529)"}
	I0507 19:55:44.088015    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.713649Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9263975694bef132","local-member-id":"aac5eb588ad33a11","added-peer-id":"aac5eb588ad33a11","added-peer-peer-urls":["https://172.19.143.74:2380"]}
	I0507 19:55:44.088015    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.714311Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9263975694bef132","local-member-id":"aac5eb588ad33a11","cluster-version":"3.5"}
	I0507 19:55:44.088427    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.714406Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0507 19:55:44.088787    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.727875Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0507 19:55:44.088787    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.733606Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.135.22:2380"}
	I0507 19:55:44.088787    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.733844Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.135.22:2380"}
	I0507 19:55:44.088787    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.734234Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aac5eb588ad33a11","initial-advertise-peer-urls":["https://172.19.135.22:2380"],"listen-peer-urls":["https://172.19.135.22:2380"],"advertise-client-urls":["https://172.19.135.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.135.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0507 19:55:44.088787    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.735199Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0507 19:55:44.088787    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 is starting a new election at term 2"}
	I0507 19:55:44.089330    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became pre-candidate at term 2"}
	I0507 19:55:44.089330    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 received MsgPreVoteResp from aac5eb588ad33a11 at term 2"}
	I0507 19:55:44.089330    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became candidate at term 3"}
	I0507 19:55:44.089330    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 received MsgVoteResp from aac5eb588ad33a11 at term 3"}
	I0507 19:55:44.089330    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became leader at term 3"}
	I0507 19:55:44.089330    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aac5eb588ad33a11 elected leader aac5eb588ad33a11 at term 3"}
	I0507 19:55:44.089599    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.258987Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aac5eb588ad33a11","local-member-attributes":"{Name:multinode-600000 ClientURLs:[https://172.19.135.22:2379]}","request-path":"/0/members/aac5eb588ad33a11/attributes","cluster-id":"9263975694bef132","publish-timeout":"7s"}
	I0507 19:55:44.089707    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.259161Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0507 19:55:44.089707    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.259624Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0507 19:55:44.089707    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.259711Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0507 19:55:44.090746    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.259193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0507 19:55:44.090864    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.263273Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.135.22:2379"}
	I0507 19:55:44.090864    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.265301Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0507 19:55:44.098034    5068 logs.go:123] Gathering logs for coredns [d27627c19808] ...
	I0507 19:55:44.098034    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d27627c19808"
	I0507 19:55:44.125963    5068 command_runner.go:130] > .:53
	I0507 19:55:44.125963    5068 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = a3820eb745a9a768a035bf81145ae0754aeb40457ffd5109db8c64dac842ada6c2edf6f9e6a410714e0f5cbc9cd90cb925a2fb37599adf58a40dc1bc5fa339b9
	I0507 19:55:44.125963    5068 command_runner.go:130] > CoreDNS-1.11.1
	I0507 19:55:44.125963    5068 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0507 19:55:44.125963    5068 command_runner.go:130] > [INFO] 127.0.0.1:50649 - 62527 "HINFO IN 8322179340745765625.4555534598598098973. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.052335947s
	I0507 19:55:44.127247    5068 logs.go:123] Gathering logs for kube-scheduler [45341720d5be] ...
	I0507 19:55:44.127247    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45341720d5be"
	I0507 19:55:44.150001    5068 command_runner.go:130] ! I0507 19:54:30.888703       1 serving.go:380] Generated self-signed cert in-memory
	I0507 19:55:44.150001    5068 command_runner.go:130] ! W0507 19:54:33.652802       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0507 19:55:44.150001    5068 command_runner.go:130] ! W0507 19:54:33.652844       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:44.150001    5068 command_runner.go:130] ! W0507 19:54:33.652885       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0507 19:55:44.150001    5068 command_runner.go:130] ! W0507 19:54:33.652896       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0507 19:55:44.150001    5068 command_runner.go:130] ! I0507 19:54:33.748572       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0507 19:55:44.150001    5068 command_runner.go:130] ! I0507 19:54:33.749371       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:44.150001    5068 command_runner.go:130] ! I0507 19:54:33.757368       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0507 19:55:44.150451    5068 command_runner.go:130] ! I0507 19:54:33.758296       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0507 19:55:44.150451    5068 command_runner.go:130] ! I0507 19:54:33.758449       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0507 19:55:44.150451    5068 command_runner.go:130] ! I0507 19:54:33.759872       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 19:55:44.150451    5068 command_runner.go:130] ! I0507 19:54:33.860140       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0507 19:55:44.153137    5068 logs.go:123] Gathering logs for kube-controller-manager [922d1e2b8745] ...
	I0507 19:55:44.153192    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 922d1e2b8745"
	I0507 19:55:44.176645    5068 command_runner.go:130] ! I0507 19:54:31.703073       1 serving.go:380] Generated self-signed cert in-memory
	I0507 19:55:44.176645    5068 command_runner.go:130] ! I0507 19:54:32.356571       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0507 19:55:44.176645    5068 command_runner.go:130] ! I0507 19:54:32.356606       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:44.177053    5068 command_runner.go:130] ! I0507 19:54:32.361009       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0507 19:55:44.177053    5068 command_runner.go:130] ! I0507 19:54:32.362062       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0507 19:55:44.177604    5068 command_runner.go:130] ! I0507 19:54:32.362316       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0507 19:55:44.177604    5068 command_runner.go:130] ! I0507 19:54:32.362806       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 19:55:44.177660    5068 command_runner.go:130] ! I0507 19:54:35.660463       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0507 19:55:44.177660    5068 command_runner.go:130] ! I0507 19:54:35.661512       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0507 19:55:44.177693    5068 command_runner.go:130] ! I0507 19:54:35.672846       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0507 19:55:44.177693    5068 command_runner.go:130] ! I0507 19:54:35.673901       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0507 19:55:44.177693    5068 command_runner.go:130] ! I0507 19:54:35.674100       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0507 19:55:44.177693    5068 command_runner.go:130] ! I0507 19:54:35.677134       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0507 19:55:44.177693    5068 command_runner.go:130] ! I0507 19:54:35.677224       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0507 19:55:44.177754    5068 command_runner.go:130] ! I0507 19:54:35.677646       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0507 19:55:44.177787    5068 command_runner.go:130] ! I0507 19:54:35.687463       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0507 19:55:44.177809    5068 command_runner.go:130] ! I0507 19:54:35.690256       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0507 19:55:44.177809    5068 command_runner.go:130] ! I0507 19:54:35.690418       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0507 19:55:44.177846    5068 command_runner.go:130] ! I0507 19:54:35.693293       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0507 19:55:44.177846    5068 command_runner.go:130] ! I0507 19:54:35.693482       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0507 19:55:44.177886    5068 command_runner.go:130] ! I0507 19:54:35.693648       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0507 19:55:44.177886    5068 command_runner.go:130] ! I0507 19:54:35.705135       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0507 19:55:44.177921    5068 command_runner.go:130] ! I0507 19:54:35.705560       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0507 19:55:44.177921    5068 command_runner.go:130] ! I0507 19:54:35.705715       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0507 19:55:44.177953    5068 command_runner.go:130] ! I0507 19:54:35.707645       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.714544       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.714950       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.714979       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.718207       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.718555       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.719592       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.721267       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.722621       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.722870       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.725345       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.725516       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.727155       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.732889       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.733036       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.733340       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.733465       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.734424       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.739429       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.740234       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.740690       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.740915       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0507 19:55:44.177981    5068 command_runner.go:130] ! E0507 19:54:35.758883       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.759554       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.764996       1 shared_informer.go:320] Caches are synced for tokens
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.770304       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.770613       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.771644       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.773532       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.773999       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.776366       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.776291       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.777049       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.778718       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.782053       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.782295       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.783178       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.783590       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0507 19:55:44.177981    5068 command_runner.go:130] ! I0507 19:54:35.785509       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0507 19:55:44.178512    5068 command_runner.go:130] ! I0507 19:54:35.785650       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0507 19:55:44.178512    5068 command_runner.go:130] ! I0507 19:54:35.785771       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:44.178512    5068 command_runner.go:130] ! I0507 19:54:35.786304       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0507 19:55:44.178512    5068 command_runner.go:130] ! I0507 19:54:35.786711       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0507 19:55:44.178512    5068 command_runner.go:130] ! I0507 19:54:35.788143       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:44.178512    5068 command_runner.go:130] ! I0507 19:54:35.788161       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0507 19:55:44.178512    5068 command_runner.go:130] ! I0507 19:54:35.788891       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0507 19:55:44.178512    5068 command_runner.go:130] ! I0507 19:54:35.788187       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:44.178512    5068 command_runner.go:130] ! I0507 19:54:35.788425       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0507 19:55:44.178687    5068 command_runner.go:130] ! I0507 19:54:35.789279       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0507 19:55:44.178687    5068 command_runner.go:130] ! I0507 19:54:35.788437       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:44.178687    5068 command_runner.go:130] ! I0507 19:54:35.788403       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0507 19:55:44.178687    5068 command_runner.go:130] ! E0507 19:54:35.794689       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0507 19:55:44.178750    5068 command_runner.go:130] ! I0507 19:54:35.794706       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0507 19:55:44.178750    5068 command_runner.go:130] ! I0507 19:54:35.797181       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0507 19:55:44.178750    5068 command_runner.go:130] ! I0507 19:54:35.797390       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0507 19:55:44.178801    5068 command_runner.go:130] ! I0507 19:54:35.797366       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0507 19:55:44.178801    5068 command_runner.go:130] ! I0507 19:54:35.798435       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0507 19:55:44.178801    5068 command_runner.go:130] ! I0507 19:54:35.799150       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0507 19:55:44.178863    5068 command_runner.go:130] ! I0507 19:54:35.799419       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0507 19:55:44.178863    5068 command_runner.go:130] ! I0507 19:54:35.800319       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0507 19:55:44.178863    5068 command_runner.go:130] ! I0507 19:54:35.800396       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0507 19:55:44.178901    5068 command_runner.go:130] ! I0507 19:54:35.801149       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0507 19:55:44.178901    5068 command_runner.go:130] ! I0507 19:54:35.801340       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0507 19:55:44.178944    5068 command_runner.go:130] ! I0507 19:54:35.805459       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0507 19:55:44.178944    5068 command_runner.go:130] ! I0507 19:54:35.806312       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0507 19:55:44.178990    5068 command_runner.go:130] ! I0507 19:54:35.806898       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0507 19:55:44.178990    5068 command_runner.go:130] ! I0507 19:54:35.806915       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0507 19:55:44.178990    5068 command_runner.go:130] ! I0507 19:54:35.820458       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0507 19:55:44.179036    5068 command_runner.go:130] ! I0507 19:54:35.823993       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0507 19:55:44.179036    5068 command_runner.go:130] ! I0507 19:54:35.824174       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0507 19:55:44.179036    5068 command_runner.go:130] ! I0507 19:54:45.843537       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0507 19:55:44.179036    5068 command_runner.go:130] ! I0507 19:54:45.845601       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0507 19:55:44.179103    5068 command_runner.go:130] ! I0507 19:54:45.845839       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0507 19:55:44.179103    5068 command_runner.go:130] ! I0507 19:54:45.846020       1 shared_informer.go:313] Waiting for caches to sync for node
	I0507 19:55:44.179103    5068 command_runner.go:130] ! I0507 19:54:45.856361       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0507 19:55:44.179103    5068 command_runner.go:130] ! I0507 19:54:45.856445       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0507 19:55:44.179164    5068 command_runner.go:130] ! I0507 19:54:45.856582       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0507 19:55:44.179164    5068 command_runner.go:130] ! I0507 19:54:45.860605       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0507 19:55:44.179164    5068 command_runner.go:130] ! I0507 19:54:45.861230       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0507 19:55:44.179164    5068 command_runner.go:130] ! I0507 19:54:45.861688       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0507 19:55:44.179226    5068 command_runner.go:130] ! I0507 19:54:45.882679       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0507 19:55:44.179226    5068 command_runner.go:130] ! I0507 19:54:45.882882       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0507 19:55:44.179268    5068 command_runner.go:130] ! I0507 19:54:45.883004       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0507 19:55:44.179268    5068 command_runner.go:130] ! I0507 19:54:45.883100       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0507 19:55:44.179268    5068 command_runner.go:130] ! I0507 19:54:45.883309       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0507 19:55:44.179321    5068 command_runner.go:130] ! I0507 19:54:45.883768       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0507 19:55:44.179321    5068 command_runner.go:130] ! I0507 19:54:45.884103       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0507 19:55:44.179365    5068 command_runner.go:130] ! I0507 19:54:45.884144       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0507 19:55:44.179365    5068 command_runner.go:130] ! I0507 19:54:45.884169       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0507 19:55:44.179412    5068 command_runner.go:130] ! I0507 19:54:45.884544       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0507 19:55:44.179412    5068 command_runner.go:130] ! I0507 19:54:45.884707       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0507 19:55:44.179451    5068 command_runner.go:130] ! I0507 19:54:45.884806       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0507 19:55:44.179451    5068 command_runner.go:130] ! I0507 19:54:45.884934       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0507 19:55:44.179493    5068 command_runner.go:130] ! I0507 19:54:45.884999       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0507 19:55:44.179493    5068 command_runner.go:130] ! I0507 19:54:45.885027       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0507 19:55:44.179532    5068 command_runner.go:130] ! I0507 19:54:45.885214       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0507 19:55:44.179532    5068 command_runner.go:130] ! I0507 19:54:45.885361       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0507 19:55:44.179584    5068 command_runner.go:130] ! I0507 19:54:45.885395       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0507 19:55:44.179623    5068 command_runner.go:130] ! I0507 19:54:45.885452       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0507 19:55:44.179623    5068 command_runner.go:130] ! I0507 19:54:45.885513       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0507 19:55:44.179676    5068 command_runner.go:130] ! I0507 19:54:45.885658       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0507 19:55:44.179712    5068 command_runner.go:130] ! I0507 19:54:45.885798       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0507 19:55:44.179712    5068 command_runner.go:130] ! I0507 19:54:45.885854       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0507 19:55:44.179758    5068 command_runner.go:130] ! I0507 19:54:45.885875       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0507 19:55:44.179758    5068 command_runner.go:130] ! I0507 19:54:45.888915       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0507 19:55:44.179794    5068 command_runner.go:130] ! I0507 19:54:45.890326       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0507 19:55:44.179794    5068 command_runner.go:130] ! I0507 19:54:45.890549       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0507 19:55:44.179794    5068 command_runner.go:130] ! I0507 19:54:45.892442       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0507 19:55:44.179841    5068 command_runner.go:130] ! I0507 19:54:45.892857       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0507 19:55:44.179841    5068 command_runner.go:130] ! I0507 19:54:45.892697       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0507 19:55:44.179841    5068 command_runner.go:130] ! I0507 19:54:45.895556       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0507 19:55:44.179877    5068 command_runner.go:130] ! I0507 19:54:45.896185       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0507 19:55:44.179877    5068 command_runner.go:130] ! I0507 19:54:45.896210       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0507 19:55:44.179923    5068 command_runner.go:130] ! I0507 19:54:45.898050       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0507 19:55:44.179923    5068 command_runner.go:130] ! I0507 19:54:45.898440       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0507 19:55:44.179961    5068 command_runner.go:130] ! I0507 19:54:45.898466       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0507 19:55:44.179961    5068 command_runner.go:130] ! I0507 19:54:45.901016       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0507 19:55:44.179961    5068 command_runner.go:130] ! I0507 19:54:45.901365       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0507 19:55:44.180005    5068 command_runner.go:130] ! I0507 19:54:45.901496       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0507 19:55:44.180005    5068 command_runner.go:130] ! I0507 19:54:45.904035       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0507 19:55:44.180042    5068 command_runner.go:130] ! I0507 19:54:45.906504       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0507 19:55:44.180042    5068 command_runner.go:130] ! I0507 19:54:45.906590       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0507 19:55:44.180086    5068 command_runner.go:130] ! I0507 19:54:45.936436       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0507 19:55:44.180086    5068 command_runner.go:130] ! I0507 19:54:45.936514       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0507 19:55:44.180086    5068 command_runner.go:130] ! I0507 19:54:45.936644       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0507 19:55:44.180124    5068 command_runner.go:130] ! I0507 19:54:45.950622       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0507 19:55:44.180124    5068 command_runner.go:130] ! I0507 19:54:45.950687       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0507 19:55:44.180169    5068 command_runner.go:130] ! I0507 19:54:45.952156       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0507 19:55:44.180169    5068 command_runner.go:130] ! I0507 19:54:45.960379       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0507 19:55:44.180206    5068 command_runner.go:130] ! I0507 19:54:45.960563       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0507 19:55:44.180206    5068 command_runner.go:130] ! I0507 19:54:45.960800       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0507 19:55:44.180251    5068 command_runner.go:130] ! I0507 19:54:45.960885       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0507 19:55:44.180251    5068 command_runner.go:130] ! I0507 19:54:45.960448       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0507 19:55:44.180289    5068 command_runner.go:130] ! I0507 19:54:45.960996       1 shared_informer.go:313] Waiting for caches to sync for job
	I0507 19:55:44.180289    5068 command_runner.go:130] ! I0507 19:54:45.964056       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0507 19:55:44.180289    5068 command_runner.go:130] ! I0507 19:54:45.964077       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0507 19:55:44.180333    5068 command_runner.go:130] ! I0507 19:54:45.964454       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0507 19:55:44.180333    5068 command_runner.go:130] ! I0507 19:54:45.967293       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0507 19:55:44.180333    5068 command_runner.go:130] ! I0507 19:54:45.967699       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0507 19:55:44.180369    5068 command_runner.go:130] ! I0507 19:54:45.967884       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0507 19:55:44.180369    5068 command_runner.go:130] ! I0507 19:54:45.969920       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0507 19:55:44.180412    5068 command_runner.go:130] ! I0507 19:54:45.969950       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0507 19:55:44.180412    5068 command_runner.go:130] ! I0507 19:54:45.979639       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0507 19:55:44.180450    5068 command_runner.go:130] ! I0507 19:54:45.993084       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0507 19:55:44.180450    5068 command_runner.go:130] ! I0507 19:54:45.993911       1 shared_informer.go:320] Caches are synced for service account
	I0507 19:55:44.180450    5068 command_runner.go:130] ! I0507 19:54:46.001799       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0507 19:55:44.180494    5068 command_runner.go:130] ! I0507 19:54:46.002705       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0507 19:55:44.180494    5068 command_runner.go:130] ! I0507 19:54:46.006101       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0507 19:55:44.180494    5068 command_runner.go:130] ! I0507 19:54:46.008805       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0507 19:55:44.180531    5068 command_runner.go:130] ! I0507 19:54:46.014352       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0507 19:55:44.180531    5068 command_runner.go:130] ! I0507 19:54:46.021643       1 shared_informer.go:320] Caches are synced for crt configmap
	I0507 19:55:44.180575    5068 command_runner.go:130] ! I0507 19:54:46.023805       1 shared_informer.go:320] Caches are synced for stateful set
	I0507 19:55:44.180575    5068 command_runner.go:130] ! I0507 19:54:46.027827       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0507 19:55:44.180575    5068 command_runner.go:130] ! I0507 19:54:46.052799       1 shared_informer.go:320] Caches are synced for namespace
	I0507 19:55:44.180612    5068 command_runner.go:130] ! I0507 19:54:46.056820       1 shared_informer.go:320] Caches are synced for PV protection
	I0507 19:55:44.180612    5068 command_runner.go:130] ! I0507 19:54:46.062319       1 shared_informer.go:320] Caches are synced for job
	I0507 19:55:44.180612    5068 command_runner.go:130] ! I0507 19:54:46.062392       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0507 19:55:44.180657    5068 command_runner.go:130] ! I0507 19:54:46.065647       1 shared_informer.go:320] Caches are synced for ephemeral
	I0507 19:55:44.180657    5068 command_runner.go:130] ! I0507 19:54:46.068108       1 shared_informer.go:320] Caches are synced for endpoint
	I0507 19:55:44.180657    5068 command_runner.go:130] ! I0507 19:54:46.072892       1 shared_informer.go:320] Caches are synced for expand
	I0507 19:55:44.180693    5068 command_runner.go:130] ! I0507 19:54:46.075814       1 shared_informer.go:320] Caches are synced for cronjob
	I0507 19:55:44.180693    5068 command_runner.go:130] ! I0507 19:54:46.077269       1 shared_informer.go:320] Caches are synced for PVC protection
	I0507 19:55:44.180693    5068 command_runner.go:130] ! I0507 19:54:46.085427       1 shared_informer.go:320] Caches are synced for disruption
	I0507 19:55:44.180738    5068 command_runner.go:130] ! I0507 19:54:46.086039       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0507 19:55:44.180738    5068 command_runner.go:130] ! I0507 19:54:46.089158       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0507 19:55:44.180738    5068 command_runner.go:130] ! I0507 19:54:46.089172       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0507 19:55:44.180776    5068 command_runner.go:130] ! I0507 19:54:46.089394       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0507 19:55:44.180776    5068 command_runner.go:130] ! I0507 19:54:46.091216       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0507 19:55:44.180776    5068 command_runner.go:130] ! I0507 19:54:46.107002       1 shared_informer.go:320] Caches are synced for deployment
	I0507 19:55:44.180825    5068 command_runner.go:130] ! I0507 19:54:46.116997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.691909ms"
	I0507 19:55:44.180825    5068 command_runner.go:130] ! I0507 19:54:46.118004       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.006µs"
	I0507 19:55:44.180862    5068 command_runner.go:130] ! I0507 19:54:46.123476       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.139964ms"
	I0507 19:55:44.180862    5068 command_runner.go:130] ! I0507 19:54:46.124362       1 shared_informer.go:320] Caches are synced for HPA
	I0507 19:55:44.180906    5068 command_runner.go:130] ! I0507 19:54:46.124468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="121.91µs"
	I0507 19:55:44.180906    5068 command_runner.go:130] ! I0507 19:54:46.181088       1 shared_informer.go:320] Caches are synced for resource quota
	I0507 19:55:44.180942    5068 command_runner.go:130] ! I0507 19:54:46.189327       1 shared_informer.go:320] Caches are synced for resource quota
	I0507 19:55:44.180942    5068 command_runner.go:130] ! I0507 19:54:46.228301       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:44.180986    5068 command_runner.go:130] ! I0507 19:54:46.229031       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:44.180986    5068 command_runner.go:130] ! I0507 19:54:46.229515       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:44.181022    5068 command_runner.go:130] ! I0507 19:54:46.229843       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000\" does not exist"
	I0507 19:55:44.181067    5068 command_runner.go:130] ! I0507 19:54:46.229885       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m02\" does not exist"
	I0507 19:55:44.181103    5068 command_runner.go:130] ! I0507 19:54:46.229901       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m03\" does not exist"
	I0507 19:55:44.181103    5068 command_runner.go:130] ! I0507 19:54:46.234886       1 shared_informer.go:320] Caches are synced for taint
	I0507 19:55:44.181148    5068 command_runner.go:130] ! I0507 19:54:46.235155       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0507 19:55:44.181148    5068 command_runner.go:130] ! I0507 19:54:46.237527       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0507 19:55:44.181148    5068 command_runner.go:130] ! I0507 19:54:46.249515       1 shared_informer.go:320] Caches are synced for node
	I0507 19:55:44.181186    5068 command_runner.go:130] ! I0507 19:54:46.249660       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0507 19:55:44.181186    5068 command_runner.go:130] ! I0507 19:54:46.249700       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0507 19:55:44.181230    5068 command_runner.go:130] ! I0507 19:54:46.249711       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0507 19:55:44.181230    5068 command_runner.go:130] ! I0507 19:54:46.249718       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0507 19:55:44.181230    5068 command_runner.go:130] ! I0507 19:54:46.261687       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000"
	I0507 19:55:44.181267    5068 command_runner.go:130] ! I0507 19:54:46.261718       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000-m02"
	I0507 19:55:44.181311    5068 command_runner.go:130] ! I0507 19:54:46.261950       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000-m03"
	I0507 19:55:44.181311    5068 command_runner.go:130] ! I0507 19:54:46.263203       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0507 19:55:44.181347    5068 command_runner.go:130] ! I0507 19:54:46.282864       1 shared_informer.go:320] Caches are synced for GC
	I0507 19:55:44.181347    5068 command_runner.go:130] ! I0507 19:54:46.282948       1 shared_informer.go:320] Caches are synced for TTL
	I0507 19:55:44.181347    5068 command_runner.go:130] ! I0507 19:54:46.291375       1 shared_informer.go:320] Caches are synced for attach detach
	I0507 19:55:44.181386    5068 command_runner.go:130] ! I0507 19:54:46.296389       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0507 19:55:44.181386    5068 command_runner.go:130] ! I0507 19:54:46.299531       1 shared_informer.go:320] Caches are synced for persistent volume
	I0507 19:55:44.181386    5068 command_runner.go:130] ! I0507 19:54:46.301547       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0507 19:55:44.181422    5068 command_runner.go:130] ! I0507 19:54:46.315610       1 shared_informer.go:320] Caches are synced for daemon sets
	I0507 19:55:44.181422    5068 command_runner.go:130] ! I0507 19:54:46.707389       1 shared_informer.go:320] Caches are synced for garbage collector
	I0507 19:55:44.181422    5068 command_runner.go:130] ! I0507 19:54:46.707484       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0507 19:55:44.181422    5068 command_runner.go:130] ! I0507 19:54:46.714879       1 shared_informer.go:320] Caches are synced for garbage collector
	I0507 19:55:44.181422    5068 command_runner.go:130] ! I0507 19:55:09.379932       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:44.181422    5068 command_runner.go:130] ! I0507 19:55:26.356626       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.170086ms"
	I0507 19:55:44.181422    5068 command_runner.go:130] ! I0507 19:55:26.358052       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.002µs"
	I0507 19:55:44.181422    5068 command_runner.go:130] ! I0507 19:55:38.936045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.905µs"
	I0507 19:55:44.181422    5068 command_runner.go:130] ! I0507 19:55:38.982779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.443975ms"
	I0507 19:55:44.181422    5068 command_runner.go:130] ! I0507 19:55:38.983177       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.503µs"
	I0507 19:55:44.181422    5068 command_runner.go:130] ! I0507 19:55:39.007447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.25642ms"
	I0507 19:55:44.181422    5068 command_runner.go:130] ! I0507 19:55:39.007824       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="337.32µs"
	I0507 19:55:44.196873    5068 logs.go:123] Gathering logs for kindnet [29b5cae0b8f1] ...
	I0507 19:55:44.196873    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b5cae0b8f1"
	I0507 19:55:44.218128    5068 command_runner.go:130] ! I0507 19:54:35.653367       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0507 19:55:44.218364    5068 command_runner.go:130] ! I0507 19:54:35.653969       1 main.go:107] hostIP = 172.19.135.22
	I0507 19:55:44.218364    5068 command_runner.go:130] ! podIP = 172.19.143.74
	I0507 19:55:44.218364    5068 command_runner.go:130] ! W0507 19:54:35.653976       1 main.go:109] hostIP(= "172.19.135.22") != podIP(= "172.19.143.74") but must be running with host network: 
	I0507 19:55:44.218408    5068 command_runner.go:130] ! I0507 19:54:35.655401       1 main.go:116] setting mtu 1500 for CNI 
	I0507 19:55:44.218408    5068 command_runner.go:130] ! I0507 19:54:35.655532       1 main.go:146] kindnetd IP family: "ipv4"
	I0507 19:55:44.218408    5068 command_runner.go:130] ! I0507 19:54:35.655617       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0507 19:55:44.218408    5068 command_runner.go:130] ! I0507 19:55:05.983217       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0507 19:55:44.218408    5068 command_runner.go:130] ! I0507 19:55:06.001182       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 19:55:44.218408    5068 command_runner.go:130] ! I0507 19:55:06.001219       1 main.go:227] handling current node
	I0507 19:55:44.218408    5068 command_runner.go:130] ! I0507 19:55:06.001493       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.218408    5068 command_runner.go:130] ! I0507 19:55:06.001598       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.218530    5068 command_runner.go:130] ! I0507 19:55:06.001955       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.19.143.144 Flags: [] Table: 0} 
	I0507 19:55:44.218530    5068 command_runner.go:130] ! I0507 19:55:06.036933       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:44.218530    5068 command_runner.go:130] ! I0507 19:55:06.037052       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:06.037122       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.19.129.4 Flags: [] Table: 0} 
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:16.046470       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:16.046556       1 main.go:227] handling current node
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:16.046569       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:16.046577       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:16.046933       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:16.046957       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:26.058109       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:26.058254       1 main.go:227] handling current node
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:26.058265       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:26.058271       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:26.058667       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:26.058697       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:36.070650       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:36.070781       1 main.go:227] handling current node
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:36.070793       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:36.070834       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:36.071124       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:44.218579    5068 command_runner.go:130] ! I0507 19:55:36.071149       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:44.221151    5068 logs.go:123] Gathering logs for Docker ...
	I0507 19:55:44.221151    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 19:55:44.242269    5068 command_runner.go:130] > May 07 19:53:11 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0507 19:55:44.242269    5068 command_runner.go:130] > May 07 19:53:11 minikube cri-dockerd[223]: time="2024-05-07T19:53:11Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0507 19:55:44.242269    5068 command_runner.go:130] > May 07 19:53:11 minikube cri-dockerd[223]: time="2024-05-07T19:53:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0507 19:55:44.242269    5068 command_runner.go:130] > May 07 19:53:11 minikube cri-dockerd[223]: time="2024-05-07T19:53:11Z" level=info msg="Start docker client with request timeout 0s"
	I0507 19:55:44.242374    5068 command_runner.go:130] > May 07 19:53:11 minikube cri-dockerd[223]: time="2024-05-07T19:53:11Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0507 19:55:44.242374    5068 command_runner.go:130] > May 07 19:53:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0507 19:55:44.242374    5068 command_runner.go:130] > May 07 19:53:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0507 19:55:44.242374    5068 command_runner.go:130] > May 07 19:53:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:14 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:14 minikube cri-dockerd[420]: time="2024-05-07T19:53:14Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:14 minikube cri-dockerd[420]: time="2024-05-07T19:53:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:14 minikube cri-dockerd[420]: time="2024-05-07T19:53:14Z" level=info msg="Start docker client with request timeout 0s"
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:14 minikube cri-dockerd[420]: time="2024-05-07T19:53:14Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:14 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:14 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:14 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:16 minikube cri-dockerd[428]: time="2024-05-07T19:53:16Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:16 minikube cri-dockerd[428]: time="2024-05-07T19:53:16Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:16 minikube cri-dockerd[428]: time="2024-05-07T19:53:16Z" level=info msg="Start docker client with request timeout 0s"
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:16 minikube cri-dockerd[428]: time="2024-05-07T19:53:16Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:18 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:18 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:18 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:18 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:18 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 systemd[1]: Starting Docker Application Container Engine...
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[656]: time="2024-05-07T19:53:56.261608662Z" level=info msg="Starting up"
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[656]: time="2024-05-07T19:53:56.264255181Z" level=info msg="containerd not running, starting managed containerd"
	I0507 19:55:44.242441    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[656]: time="2024-05-07T19:53:56.267798843Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	I0507 19:55:44.242981    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.292663096Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0507 19:55:44.242981    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.316810753Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0507 19:55:44.242981    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.316928685Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0507 19:55:44.242981    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.317059021Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0507 19:55:44.242981    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.317074525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:44.243085    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.317778516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:44.243132    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.317870241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:44.243174    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.318053591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:44.243174    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.318181025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:44.243174    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.318200831Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0507 19:55:44.243174    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.318211033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:44.243174    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.318648452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:44.243174    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.319370548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:44.243174    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.322128697Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:44.243174    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.322287440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:44.243174    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.322423477Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:44.243423    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.322511301Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0507 19:55:44.243423    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.323103462Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0507 19:55:44.243423    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.323264406Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0507 19:55:44.243423    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.323281010Z" level=info msg="metadata content store policy set" policy=shared
	I0507 19:55:44.243423    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.329512102Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0507 19:55:44.243423    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.329607228Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0507 19:55:44.243565    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.329699453Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0507 19:55:44.243565    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.329991833Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0507 19:55:44.243565    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.330149675Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0507 19:55:44.243565    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.330391841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0507 19:55:44.243565    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331279682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0507 19:55:44.243565    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331558958Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0507 19:55:44.243565    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331719502Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0507 19:55:44.243700    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331752511Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0507 19:55:44.243700    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331780218Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0507 19:55:44.243700    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331804825Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0507 19:55:44.243700    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332099005Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0507 19:55:44.243700    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332235742Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0507 19:55:44.243700    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332267150Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0507 19:55:44.243825    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332290657Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0507 19:55:44.243825    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332323766Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0507 19:55:44.244122    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332346572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0507 19:55:44.244213    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332381181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.244213    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332407189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.244283    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332431795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.244343    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332459103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.244343    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332481509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.244343    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332504615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.244343    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332528722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.244343    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332552728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.244343    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332576134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.244343    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332603642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.244343    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332625548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.244343    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332651055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.244343    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332673961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.244343    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333069468Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0507 19:55:44.244343    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333235413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.244343    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333383554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.244343    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333414662Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0507 19:55:44.244875    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333616417Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0507 19:55:44.244965    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333710943Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0507 19:55:44.244965    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333725547Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0507 19:55:44.245109    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333736349Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0507 19:55:44.245109    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333796266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.245201    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333810170Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0507 19:55:44.245201    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333876888Z" level=info msg="NRI interface is disabled by configuration."
	I0507 19:55:44.245290    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.334581479Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0507 19:55:44.245378    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.334799638Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0507 19:55:44.245378    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.335014597Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0507 19:55:44.245378    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.335347487Z" level=info msg="containerd successfully booted in 0.045275s"
	I0507 19:55:44.245464    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.321187459Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0507 19:55:44.245464    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.476287680Z" level=info msg="Loading containers: start."
	I0507 19:55:44.245551    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.877079663Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0507 19:55:44.245551    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.952570655Z" level=info msg="Loading containers: done."
	I0507 19:55:44.245635    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.979382413Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0507 19:55:44.245635    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.980260841Z" level=info msg="Daemon has completed initialization"
	I0507 19:55:44.245635    5068 command_runner.go:130] > May 07 19:53:58 multinode-600000 dockerd[656]: time="2024-05-07T19:53:58.031005949Z" level=info msg="API listen on [::]:2376"
	I0507 19:55:44.245723    5068 command_runner.go:130] > May 07 19:53:58 multinode-600000 systemd[1]: Started Docker Application Container Engine.
	I0507 19:55:44.245723    5068 command_runner.go:130] > May 07 19:53:58 multinode-600000 dockerd[656]: time="2024-05-07T19:53:58.031256476Z" level=info msg="API listen on /var/run/docker.sock"
	I0507 19:55:44.245796    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 systemd[1]: Stopping Docker Application Container Engine...
	I0507 19:55:44.245835    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 dockerd[656]: time="2024-05-07T19:54:20.774198260Z" level=info msg="Processing signal 'terminated'"
	I0507 19:55:44.245835    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 dockerd[656]: time="2024-05-07T19:54:20.776613097Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0507 19:55:44.245835    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 dockerd[656]: time="2024-05-07T19:54:20.776805608Z" level=info msg="Daemon shutdown complete"
	I0507 19:55:44.245835    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 dockerd[656]: time="2024-05-07T19:54:20.776895213Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0507 19:55:44.245835    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 dockerd[656]: time="2024-05-07T19:54:20.776925814Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0507 19:55:44.245835    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 systemd[1]: docker.service: Deactivated successfully.
	I0507 19:55:44.245994    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 systemd[1]: Stopped Docker Application Container Engine.
	I0507 19:55:44.245994    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 systemd[1]: Starting Docker Application Container Engine...
	I0507 19:55:44.246082    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:21.844803108Z" level=info msg="Starting up"
	I0507 19:55:44.246082    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:21.845592952Z" level=info msg="containerd not running, starting managed containerd"
	I0507 19:55:44.246168    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:21.846791420Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1053
	I0507 19:55:44.246252    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.877926981Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0507 19:55:44.246252    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907006826Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0507 19:55:44.246338    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907105131Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0507 19:55:44.246338    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907143533Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0507 19:55:44.246434    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907156034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:44.246434    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907277841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:44.246516    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907322244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:44.246516    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907477852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:44.246620    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907596759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:44.246620    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907616260Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0507 19:55:44.246706    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907627661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:44.246706    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907658363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:44.246793    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907868674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:44.246878    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.910668333Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:44.246878    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.910832542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:44.246967    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.910974650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:44.246967    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911056755Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0507 19:55:44.248238    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911079056Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0507 19:55:44.248238    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911093757Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0507 19:55:44.248238    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911103457Z" level=info msg="metadata content store policy set" policy=shared
	I0507 19:55:44.248238    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911348471Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0507 19:55:44.248238    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911388073Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0507 19:55:44.248238    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911402674Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0507 19:55:44.248238    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911415475Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0507 19:55:44.248238    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911427076Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0507 19:55:44.248769    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911464678Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0507 19:55:44.248814    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911666589Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0507 19:55:44.248870    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911840999Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0507 19:55:44.248910    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911855900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0507 19:55:44.248958    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911868601Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0507 19:55:44.249043    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911909603Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0507 19:55:44.249084    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911924204Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0507 19:55:44.249130    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911941405Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0507 19:55:44.249215    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911955506Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0507 19:55:44.249255    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911969406Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0507 19:55:44.249340    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911987907Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0507 19:55:44.249385    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912002408Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0507 19:55:44.249426    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912014509Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0507 19:55:44.249471    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912032910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.249512    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912048811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.249558    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912061212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.249600    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912073812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.249822    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912085813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.249910    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912098614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.249994    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912110514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.250024    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912123015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.250069    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912136916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.250110    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912151617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.250196    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912162617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.250247    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912174218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.250287    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912189019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.250378    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912203420Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0507 19:55:44.250428    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912223321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.250518    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912235321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.250558    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912245922Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0507 19:55:44.250558    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912307726Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0507 19:55:44.250648    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912877958Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0507 19:55:44.250737    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912987064Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0507 19:55:44.250787    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913005665Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0507 19:55:44.250872    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913060968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0507 19:55:44.252351    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913148473Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0507 19:55:44.252351    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913162874Z" level=info msg="NRI interface is disabled by configuration."
	I0507 19:55:44.252351    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913518894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0507 19:55:44.252351    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913666902Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0507 19:55:44.252351    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913836712Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0507 19:55:44.252351    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913869014Z" level=info msg="containerd successfully booted in 0.037038s"
	I0507 19:55:44.252351    5068 command_runner.go:130] > May 07 19:54:22 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:22.886642029Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0507 19:55:44.252351    5068 command_runner.go:130] > May 07 19:54:22 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:22.917701485Z" level=info msg="Loading containers: start."
	I0507 19:55:44.252351    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.220079986Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0507 19:55:44.252351    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.297928389Z" level=info msg="Loading containers: done."
	I0507 19:55:44.252351    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.323426131Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0507 19:55:44.252351    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.323561939Z" level=info msg="Daemon has completed initialization"
	I0507 19:55:44.252351    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.371361642Z" level=info msg="API listen on /var/run/docker.sock"
	I0507 19:55:44.252873    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.371563053Z" level=info msg="API listen on [::]:2376"
	I0507 19:55:44.252873    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 systemd[1]: Started Docker Application Container Engine.
	I0507 19:55:44.252927    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Start docker client with request timeout 0s"
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Loaded network plugin cni"
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Start cri-dockerd grpc backend"
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:28Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-5j966_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"99af61c6e282aa13c7209e469e5e354f24968796fc455a65fdf2e8611f760994\""
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:28Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-gcqlv_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4afb10dc8b11575b4eaa25a6b283141c6e029c9b44d3db3a69e4c934171b778e\""
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.542938073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.543010577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.543042179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.543273292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89c8a2313bcaf38f51cf6dbb015e4b3d1ed11fef724fa2a2ecfd86165a93435e/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.675480269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.675546573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.675564974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.252998    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.684262666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.253526    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.725921222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.253569    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.726068230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.253569    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.726254241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.253612    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.726575359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.765272147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.765421056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.765494660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.766208600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c37290307d14956d6c732916d8f8cad779b8e57047c0b20cc5a97abeea21709/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c666fba0d07531cb6ff4a110f6538c8fbffaa474e8b7744eecd95c2c5449ac24/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.943914850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.944218768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.944339474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.944568887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fec63580ff2669cca3046ae403d6a288bb279ca84766c91bd6464d8b2335c567/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.094912590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.095972050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.096703691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.098389387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.174777807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.174917115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.174947116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.175427944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.179401568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.253643    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.180225415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.254236    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.180387824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.254236    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.180691941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:33Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.393545198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.393776611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.393798612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.393904518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.429313521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.429355823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.429371924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.429510732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.450929143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.451230160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.451320165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.451541578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/09d2fda974adf9dbabc54b3412155043fbda490a951a6b325ac66ef3e385e99d/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/deb171c003562d2f3e3c8e1ec2fbec5ecaa700e48e277dd0cc50addf6cbb21a3/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/857f6b563091091373f72d143ed2af0ab7469cb77eb82675a7f665d172f1793a/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:44.254308    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.950666506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.254836    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.951075429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.254908    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.951189235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.254908    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.951373146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.254959    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.055721147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.254991    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.055815952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.254991    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.055860855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.255029    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.056635099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.255059    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.189264699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.255059    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.189723325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.255059    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.189831731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.255059    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.190012442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.255059    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 dockerd[1047]: time="2024-05-07T19:55:05.347820040Z" level=info msg="ignoring event" container=d1e3e4629bc4ab52c27aca01f9ac01a28969e78a370077ee687920a51d952e19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0507 19:55:44.255059    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:05.348040655Z" level=info msg="shim disconnected" id=d1e3e4629bc4ab52c27aca01f9ac01a28969e78a370077ee687920a51d952e19 namespace=moby
	I0507 19:55:44.255059    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:05.348091458Z" level=warning msg="cleaning up after shim disconnected" id=d1e3e4629bc4ab52c27aca01f9ac01a28969e78a370077ee687920a51d952e19 namespace=moby
	I0507 19:55:44.255059    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:05.348099558Z" level=info msg="cleaning up dead shim" namespace=moby
	I0507 19:55:44.255059    5068 command_runner.go:130] > May 07 19:55:17 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:17.037412688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.255059    5068 command_runner.go:130] > May 07 19:55:17 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:17.037563097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.255059    5068 command_runner.go:130] > May 07 19:55:17 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:17.037957521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.255059    5068 command_runner.go:130] > May 07 19:55:17 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:17.038368445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.255059    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.073681495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.255059    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.075144480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.255586    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.075421996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.255628    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.075618907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.255628    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.083978388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.255661    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.085517877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.255661    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.085609682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.255661    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.085891498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.255661    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:55:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/56c438bec17775a85810d84da03e966b7c8b3307695f327170eb2d1f6f413190/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:44.255661    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:55:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f8dc35309168fbb7208444e18cedbe0a5ab2522d363e8b998b56b731b941b23c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0507 19:55:44.255661    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.552043154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.255661    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.552176862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.255661    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.552192263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.255661    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.552275368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.255661    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.595560233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:44.255661    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.595882353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:44.255661    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.595904855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.255661    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.596079265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:44.256184    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.256243    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.256285    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.256317    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.256317    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.256357    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.256389    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.256389    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.256389    5068 command_runner.go:130] > May 07 19:55:41 multinode-600000 dockerd[1047]: 2024/05/07 19:55:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.256389    5068 command_runner.go:130] > May 07 19:55:41 multinode-600000 dockerd[1047]: 2024/05/07 19:55:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.256389    5068 command_runner.go:130] > May 07 19:55:41 multinode-600000 dockerd[1047]: 2024/05/07 19:55:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.256389    5068 command_runner.go:130] > May 07 19:55:41 multinode-600000 dockerd[1047]: 2024/05/07 19:55:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.256389    5068 command_runner.go:130] > May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.256389    5068 command_runner.go:130] > May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.256389    5068 command_runner.go:130] > May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.256389    5068 command_runner.go:130] > May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.256389    5068 command_runner.go:130] > May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:44.284875    5068 logs.go:123] Gathering logs for container status ...
	I0507 19:55:44.284875    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 19:55:44.340929    5068 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0507 19:55:44.340929    5068 command_runner.go:130] > 78ecb8cdfd06c       8c811b4aec35f                                                                                         6 seconds ago        Running             busybox                   1                   f8dc35309168f       busybox-fc5497c4f-gcqlv
	I0507 19:55:44.340929    5068 command_runner.go:130] > d27627c198085       cbb01a7bd410d                                                                                         6 seconds ago        Running             coredns                   1                   56c438bec1777       coredns-7db6d8ff4d-5j966
	I0507 19:55:44.340929    5068 command_runner.go:130] > 4c93a69b2eee4       6e38f40d628db                                                                                         28 seconds ago       Running             storage-provisioner       2                   09d2fda974adf       storage-provisioner
	I0507 19:55:44.340929    5068 command_runner.go:130] > 29b5cae0b8f14       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   857f6b5630910       kindnet-zw4r9
	I0507 19:55:44.341155    5068 command_runner.go:130] > 5255a972ff6ce       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   deb171c003562       kube-proxy-c9gw5
	I0507 19:55:44.341155    5068 command_runner.go:130] > d1e3e4629bc4a       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   09d2fda974adf       storage-provisioner
	I0507 19:55:44.341227    5068 command_runner.go:130] > 7c95e3addc4b8       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   fec63580ff266       kube-apiserver-multinode-600000
	I0507 19:55:44.341304    5068 command_runner.go:130] > ac320a872e77c       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   c666fba0d0753       etcd-multinode-600000
	I0507 19:55:44.341370    5068 command_runner.go:130] > 922d1e2b87454       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   5c37290307d14       kube-controller-manager-multinode-600000
	I0507 19:55:44.341440    5068 command_runner.go:130] > 45341720d5be3       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   89c8a2313bcaf       kube-scheduler-multinode-600000
	I0507 19:55:44.341440    5068 command_runner.go:130] > 66301c2be7060       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago       Exited              busybox                   0                   4afb10dc8b115       busybox-fc5497c4f-gcqlv
	I0507 19:55:44.341505    5068 command_runner.go:130] > 9550b237d8d7b       cbb01a7bd410d                                                                                         21 minutes ago       Exited              coredns                   0                   99af61c6e282a       coredns-7db6d8ff4d-5j966
	I0507 19:55:44.341574    5068 command_runner.go:130] > 2d49ad078ed35       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              21 minutes ago       Exited              kindnet-cni               0                   58ebd877d77fb       kindnet-zw4r9
	I0507 19:55:44.341695    5068 command_runner.go:130] > aa9692c1fbd3b       a0bf559e280cf                                                                                         21 minutes ago       Exited              kube-proxy                0                   70cff02905e8f       kube-proxy-c9gw5
	I0507 19:55:44.341759    5068 command_runner.go:130] > 7cefdac2050fa       259c8277fcbbc                                                                                         22 minutes ago       Exited              kube-scheduler            0                   75f27faec2ed6       kube-scheduler-multinode-600000
	I0507 19:55:44.341759    5068 command_runner.go:130] > 3067f16e2e380       c7aad43836fa5                                                                                         22 minutes ago       Exited              kube-controller-manager   0                   af16a92d7c1cc       kube-controller-manager-multinode-600000
	I0507 19:55:44.346027    5068 logs.go:123] Gathering logs for kindnet [2d49ad078ed3] ...
	I0507 19:55:44.346102    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d49ad078ed3"
	I0507 19:55:44.377907    5068 command_runner.go:130] ! I0507 19:41:07.116810       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.377907    5068 command_runner.go:130] ! I0507 19:41:07.116911       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.378447    5068 command_runner.go:130] ! I0507 19:41:07.117095       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.378447    5068 command_runner.go:130] ! I0507 19:41:17.123472       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.378447    5068 command_runner.go:130] ! I0507 19:41:17.123573       1 main.go:227] handling current node
	I0507 19:55:44.378447    5068 command_runner.go:130] ! I0507 19:41:17.123585       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.378447    5068 command_runner.go:130] ! I0507 19:41:17.123594       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.378447    5068 command_runner.go:130] ! I0507 19:41:17.124084       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.378447    5068 command_runner.go:130] ! I0507 19:41:17.124175       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.381556    5068 command_runner.go:130] ! I0507 19:47:37.579334       1 main.go:227] handling current node
	I0507 19:55:44.381556    5068 command_runner.go:130] ! I0507 19:47:37.579346       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.381556    5068 command_runner.go:130] ! I0507 19:47:37.579352       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.381556    5068 command_runner.go:130] ! I0507 19:47:37.580168       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.381556    5068 command_runner.go:130] ! I0507 19:47:37.580202       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.381556    5068 command_runner.go:130] ! I0507 19:47:47.591084       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.381556    5068 command_runner.go:130] ! I0507 19:47:47.591125       1 main.go:227] handling current node
	I0507 19:55:44.381674    5068 command_runner.go:130] ! I0507 19:47:47.591136       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.381674    5068 command_runner.go:130] ! I0507 19:47:47.591143       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.381715    5068 command_runner.go:130] ! I0507 19:47:47.591350       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.381715    5068 command_runner.go:130] ! I0507 19:47:47.591365       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.381715    5068 command_runner.go:130] ! I0507 19:47:57.599687       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.381715    5068 command_runner.go:130] ! I0507 19:47:57.599780       1 main.go:227] handling current node
	I0507 19:55:44.381794    5068 command_runner.go:130] ! I0507 19:47:57.600282       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.381794    5068 command_runner.go:130] ! I0507 19:47:57.600376       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.381794    5068 command_runner.go:130] ! I0507 19:47:57.600829       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.381794    5068 command_runner.go:130] ! I0507 19:47:57.601089       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.381853    5068 command_runner.go:130] ! I0507 19:48:07.608877       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.381853    5068 command_runner.go:130] ! I0507 19:48:07.608973       1 main.go:227] handling current node
	I0507 19:55:44.381853    5068 command_runner.go:130] ! I0507 19:48:07.609012       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.381885    5068 command_runner.go:130] ! I0507 19:48:07.609021       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.381885    5068 command_runner.go:130] ! I0507 19:48:07.609341       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.381885    5068 command_runner.go:130] ! I0507 19:48:07.609437       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.381885    5068 command_runner.go:130] ! I0507 19:48:17.616839       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.381885    5068 command_runner.go:130] ! I0507 19:48:17.616948       1 main.go:227] handling current node
	I0507 19:55:44.381885    5068 command_runner.go:130] ! I0507 19:48:17.616962       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.381885    5068 command_runner.go:130] ! I0507 19:48:17.616970       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.381885    5068 command_runner.go:130] ! I0507 19:48:17.617201       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.381885    5068 command_runner.go:130] ! I0507 19:48:17.617302       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.381885    5068 command_runner.go:130] ! I0507 19:48:27.622610       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.381977    5068 command_runner.go:130] ! I0507 19:48:27.622773       1 main.go:227] handling current node
	I0507 19:55:44.381977    5068 command_runner.go:130] ! I0507 19:48:27.622786       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.381977    5068 command_runner.go:130] ! I0507 19:48:27.622794       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.381977    5068 command_runner.go:130] ! I0507 19:48:27.622907       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.382019    5068 command_runner.go:130] ! I0507 19:48:27.622913       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.382019    5068 command_runner.go:130] ! I0507 19:48:37.635466       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.382019    5068 command_runner.go:130] ! I0507 19:48:37.635567       1 main.go:227] handling current node
	I0507 19:55:44.382086    5068 command_runner.go:130] ! I0507 19:48:37.635581       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.382086    5068 command_runner.go:130] ! I0507 19:48:37.635588       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.382086    5068 command_runner.go:130] ! I0507 19:48:37.635708       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.382086    5068 command_runner.go:130] ! I0507 19:48:37.635731       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.382086    5068 command_runner.go:130] ! I0507 19:48:47.648680       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.382170    5068 command_runner.go:130] ! I0507 19:48:47.648719       1 main.go:227] handling current node
	I0507 19:55:44.382170    5068 command_runner.go:130] ! I0507 19:48:47.648730       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.382170    5068 command_runner.go:130] ! I0507 19:48:47.648736       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.382170    5068 command_runner.go:130] ! I0507 19:48:47.649047       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.382229    5068 command_runner.go:130] ! I0507 19:48:47.649073       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.382229    5068 command_runner.go:130] ! I0507 19:48:57.661624       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.382229    5068 command_runner.go:130] ! I0507 19:48:57.661723       1 main.go:227] handling current node
	I0507 19:55:44.382229    5068 command_runner.go:130] ! I0507 19:48:57.661736       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.382229    5068 command_runner.go:130] ! I0507 19:48:57.661745       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.382229    5068 command_runner.go:130] ! I0507 19:48:57.661906       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.382229    5068 command_runner.go:130] ! I0507 19:48:57.661973       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.382229    5068 command_runner.go:130] ! I0507 19:49:07.670042       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.382294    5068 command_runner.go:130] ! I0507 19:49:07.670434       1 main.go:227] handling current node
	I0507 19:55:44.382294    5068 command_runner.go:130] ! I0507 19:49:07.670598       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.382294    5068 command_runner.go:130] ! I0507 19:49:07.670611       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.382294    5068 command_runner.go:130] ! I0507 19:49:07.670874       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.382294    5068 command_runner.go:130] ! I0507 19:49:07.670892       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.382294    5068 command_runner.go:130] ! I0507 19:49:17.688752       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.382294    5068 command_runner.go:130] ! I0507 19:49:17.688862       1 main.go:227] handling current node
	I0507 19:55:44.382406    5068 command_runner.go:130] ! I0507 19:49:17.689132       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.382406    5068 command_runner.go:130] ! I0507 19:49:17.689148       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.382406    5068 command_runner.go:130] ! I0507 19:49:17.689445       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.382406    5068 command_runner.go:130] ! I0507 19:49:17.689461       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.382406    5068 command_runner.go:130] ! I0507 19:49:27.703795       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.382406    5068 command_runner.go:130] ! I0507 19:49:27.703825       1 main.go:227] handling current node
	I0507 19:55:44.382406    5068 command_runner.go:130] ! I0507 19:49:27.703838       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.382406    5068 command_runner.go:130] ! I0507 19:49:27.703846       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.382406    5068 command_runner.go:130] ! I0507 19:49:27.704329       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.382406    5068 command_runner.go:130] ! I0507 19:49:27.704365       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.382406    5068 command_runner.go:130] ! I0507 19:49:37.711372       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.382406    5068 command_runner.go:130] ! I0507 19:49:37.711497       1 main.go:227] handling current node
	I0507 19:55:44.382527    5068 command_runner.go:130] ! I0507 19:49:37.711514       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.382527    5068 command_runner.go:130] ! I0507 19:49:37.711524       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.382527    5068 command_runner.go:130] ! I0507 19:49:37.711882       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.382569    5068 command_runner.go:130] ! I0507 19:49:37.711917       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.382569    5068 command_runner.go:130] ! I0507 19:49:47.727743       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.382569    5068 command_runner.go:130] ! I0507 19:49:47.727786       1 main.go:227] handling current node
	I0507 19:55:44.382569    5068 command_runner.go:130] ! I0507 19:49:47.727798       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.382569    5068 command_runner.go:130] ! I0507 19:49:47.727806       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.382569    5068 command_runner.go:130] ! I0507 19:49:47.728278       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.382569    5068 command_runner.go:130] ! I0507 19:49:47.728401       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.382569    5068 command_runner.go:130] ! I0507 19:49:57.734796       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.382662    5068 command_runner.go:130] ! I0507 19:49:57.734892       1 main.go:227] handling current node
	I0507 19:55:44.382662    5068 command_runner.go:130] ! I0507 19:49:57.734905       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.382662    5068 command_runner.go:130] ! I0507 19:49:57.734913       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.382662    5068 command_runner.go:130] ! I0507 19:49:57.735055       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.382704    5068 command_runner.go:130] ! I0507 19:49:57.735077       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.382704    5068 command_runner.go:130] ! I0507 19:50:07.747486       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.382704    5068 command_runner.go:130] ! I0507 19:50:07.747598       1 main.go:227] handling current node
	I0507 19:55:44.382704    5068 command_runner.go:130] ! I0507 19:50:07.747612       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.382704    5068 command_runner.go:130] ! I0507 19:50:07.747621       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.382704    5068 command_runner.go:130] ! I0507 19:50:07.748185       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.382704    5068 command_runner.go:130] ! I0507 19:50:07.748222       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.382785    5068 command_runner.go:130] ! I0507 19:50:17.755602       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.382785    5068 command_runner.go:130] ! I0507 19:50:17.755761       1 main.go:227] handling current node
	I0507 19:55:44.382785    5068 command_runner.go:130] ! I0507 19:50:17.755774       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.382785    5068 command_runner.go:130] ! I0507 19:50:17.755782       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.382785    5068 command_runner.go:130] ! I0507 19:50:17.756227       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:44.382835    5068 command_runner.go:130] ! I0507 19:50:17.756267       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:44.382835    5068 command_runner.go:130] ! I0507 19:50:27.770562       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.382835    5068 command_runner.go:130] ! I0507 19:50:27.770678       1 main.go:227] handling current node
	I0507 19:55:44.382835    5068 command_runner.go:130] ! I0507 19:50:27.770692       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.382835    5068 command_runner.go:130] ! I0507 19:50:27.770700       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.382835    5068 command_runner.go:130] ! I0507 19:50:27.775735       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:44.382835    5068 command_runner.go:130] ! I0507 19:50:27.775767       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:44.382920    5068 command_runner.go:130] ! I0507 19:50:27.775839       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.19.129.4 Flags: [] Table: 0} 
	I0507 19:55:44.382920    5068 command_runner.go:130] ! I0507 19:50:37.783936       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.382920    5068 command_runner.go:130] ! I0507 19:50:37.787174       1 main.go:227] handling current node
	I0507 19:55:44.382920    5068 command_runner.go:130] ! I0507 19:50:37.787394       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.382920    5068 command_runner.go:130] ! I0507 19:50:37.787449       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.382920    5068 command_runner.go:130] ! I0507 19:50:37.787687       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:44.382920    5068 command_runner.go:130] ! I0507 19:50:37.787791       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:44.382920    5068 command_runner.go:130] ! I0507 19:50:47.804388       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.382920    5068 command_runner.go:130] ! I0507 19:50:47.804423       1 main.go:227] handling current node
	I0507 19:55:44.382920    5068 command_runner.go:130] ! I0507 19:50:47.804434       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.383040    5068 command_runner.go:130] ! I0507 19:50:47.804441       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.383040    5068 command_runner.go:130] ! I0507 19:50:47.805320       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:44.383082    5068 command_runner.go:130] ! I0507 19:50:47.805405       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:44.383082    5068 command_runner.go:130] ! I0507 19:50:57.817550       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.383082    5068 command_runner.go:130] ! I0507 19:50:57.817645       1 main.go:227] handling current node
	I0507 19:55:44.383082    5068 command_runner.go:130] ! I0507 19:50:57.817660       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.383082    5068 command_runner.go:130] ! I0507 19:50:57.817668       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.383082    5068 command_runner.go:130] ! I0507 19:50:57.817802       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:44.383082    5068 command_runner.go:130] ! I0507 19:50:57.817829       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:44.383170    5068 command_runner.go:130] ! I0507 19:51:07.829324       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.383170    5068 command_runner.go:130] ! I0507 19:51:07.829427       1 main.go:227] handling current node
	I0507 19:55:44.383170    5068 command_runner.go:130] ! I0507 19:51:07.829440       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.383170    5068 command_runner.go:130] ! I0507 19:51:07.829449       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.383170    5068 command_runner.go:130] ! I0507 19:51:07.829931       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:44.383170    5068 command_runner.go:130] ! I0507 19:51:07.830095       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:44.383170    5068 command_runner.go:130] ! I0507 19:51:17.844953       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.383252    5068 command_runner.go:130] ! I0507 19:51:17.845032       1 main.go:227] handling current node
	I0507 19:55:44.383252    5068 command_runner.go:130] ! I0507 19:51:17.845046       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.383252    5068 command_runner.go:130] ! I0507 19:51:17.845128       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.383252    5068 command_runner.go:130] ! I0507 19:51:17.845337       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:44.383252    5068 command_runner.go:130] ! I0507 19:51:17.845367       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:44.383252    5068 command_runner.go:130] ! I0507 19:51:27.851575       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.383325    5068 command_runner.go:130] ! I0507 19:51:27.851686       1 main.go:227] handling current node
	I0507 19:55:44.383325    5068 command_runner.go:130] ! I0507 19:51:27.851698       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.383325    5068 command_runner.go:130] ! I0507 19:51:27.851706       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.383410    5068 command_runner.go:130] ! I0507 19:51:27.852455       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:44.383443    5068 command_runner.go:130] ! I0507 19:51:27.852540       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:44.383443    5068 command_runner.go:130] ! I0507 19:51:37.859761       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.383443    5068 command_runner.go:130] ! I0507 19:51:37.859857       1 main.go:227] handling current node
	I0507 19:55:44.383443    5068 command_runner.go:130] ! I0507 19:51:37.859871       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.383443    5068 command_runner.go:130] ! I0507 19:51:37.859930       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.383443    5068 command_runner.go:130] ! I0507 19:51:37.860319       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:44.383443    5068 command_runner.go:130] ! I0507 19:51:37.860413       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:44.383443    5068 command_runner.go:130] ! I0507 19:51:47.872402       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.383443    5068 command_runner.go:130] ! I0507 19:51:47.872506       1 main.go:227] handling current node
	I0507 19:55:44.383537    5068 command_runner.go:130] ! I0507 19:51:47.872520       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.383537    5068 command_runner.go:130] ! I0507 19:51:47.872528       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.383537    5068 command_runner.go:130] ! I0507 19:51:47.872641       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:44.383579    5068 command_runner.go:130] ! I0507 19:51:47.872692       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:44.383579    5068 command_runner.go:130] ! I0507 19:51:57.885508       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.383579    5068 command_runner.go:130] ! I0507 19:51:57.885541       1 main.go:227] handling current node
	I0507 19:55:44.383579    5068 command_runner.go:130] ! I0507 19:51:57.885551       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.383579    5068 command_runner.go:130] ! I0507 19:51:57.885556       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.383579    5068 command_runner.go:130] ! I0507 19:51:57.885664       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:44.383579    5068 command_runner.go:130] ! I0507 19:51:57.885730       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:44.383579    5068 command_runner.go:130] ! I0507 19:52:07.898773       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:44.383579    5068 command_runner.go:130] ! I0507 19:52:07.899054       1 main.go:227] handling current node
	I0507 19:55:44.383673    5068 command_runner.go:130] ! I0507 19:52:07.899142       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:44.383673    5068 command_runner.go:130] ! I0507 19:52:07.899258       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:44.383673    5068 command_runner.go:130] ! I0507 19:52:07.899556       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:44.383673    5068 command_runner.go:130] ! I0507 19:52:07.899651       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:44.400680    5068 logs.go:123] Gathering logs for describe nodes ...
	I0507 19:55:44.400680    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 19:55:44.590892    5068 command_runner.go:130] > Name:               multinode-600000
	I0507 19:55:44.590892    5068 command_runner.go:130] > Roles:              control-plane
	I0507 19:55:44.590892    5068 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0507 19:55:44.590892    5068 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0507 19:55:44.590892    5068 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0507 19:55:44.590892    5068 command_runner.go:130] >                     kubernetes.io/hostname=multinode-600000
	I0507 19:55:44.591001    5068 command_runner.go:130] >                     kubernetes.io/os=linux
	I0507 19:55:44.591001    5068 command_runner.go:130] >                     minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	I0507 19:55:44.591001    5068 command_runner.go:130] >                     minikube.k8s.io/name=multinode-600000
	I0507 19:55:44.591001    5068 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0507 19:55:44.591057    5068 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_07T19_33_45_0700
	I0507 19:55:44.591057    5068 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0507 19:55:44.591057    5068 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0507 19:55:44.591057    5068 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0507 19:55:44.591057    5068 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0507 19:55:44.591057    5068 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0507 19:55:44.591130    5068 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0507 19:55:44.591130    5068 command_runner.go:130] > CreationTimestamp:  Tue, 07 May 2024 19:33:41 +0000
	I0507 19:55:44.591130    5068 command_runner.go:130] > Taints:             <none>
	I0507 19:55:44.591130    5068 command_runner.go:130] > Unschedulable:      false
	I0507 19:55:44.591189    5068 command_runner.go:130] > Lease:
	I0507 19:55:44.591189    5068 command_runner.go:130] >   HolderIdentity:  multinode-600000
	I0507 19:55:44.591189    5068 command_runner.go:130] >   AcquireTime:     <unset>
	I0507 19:55:44.591302    5068 command_runner.go:130] >   RenewTime:       Tue, 07 May 2024 19:55:35 +0000
	I0507 19:55:44.591302    5068 command_runner.go:130] > Conditions:
	I0507 19:55:44.591302    5068 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0507 19:55:44.591381    5068 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0507 19:55:44.591381    5068 command_runner.go:130] >   MemoryPressure   False   Tue, 07 May 2024 19:55:09 +0000   Tue, 07 May 2024 19:33:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0507 19:55:44.591425    5068 command_runner.go:130] >   DiskPressure     False   Tue, 07 May 2024 19:55:09 +0000   Tue, 07 May 2024 19:33:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0507 19:55:44.591463    5068 command_runner.go:130] >   PIDPressure      False   Tue, 07 May 2024 19:55:09 +0000   Tue, 07 May 2024 19:33:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0507 19:55:44.591497    5068 command_runner.go:130] >   Ready            True    Tue, 07 May 2024 19:55:09 +0000   Tue, 07 May 2024 19:55:09 +0000   KubeletReady                 kubelet is posting ready status
	I0507 19:55:44.591497    5068 command_runner.go:130] > Addresses:
	I0507 19:55:44.591497    5068 command_runner.go:130] >   InternalIP:  172.19.135.22
	I0507 19:55:44.591497    5068 command_runner.go:130] >   Hostname:    multinode-600000
	I0507 19:55:44.591497    5068 command_runner.go:130] > Capacity:
	I0507 19:55:44.591497    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:44.591497    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:44.591497    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:44.591497    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:44.591497    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:44.591497    5068 command_runner.go:130] > Allocatable:
	I0507 19:55:44.591497    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:44.591497    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:44.591497    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:44.591497    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:44.591497    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:44.591497    5068 command_runner.go:130] > System Info:
	I0507 19:55:44.591497    5068 command_runner.go:130] >   Machine ID:                 fa6f1530e0ab4546b96ea753f13add46
	I0507 19:55:44.591497    5068 command_runner.go:130] >   System UUID:                f3433f71-57fc-a747-9f8d-4f98c0c4b458
	I0507 19:55:44.591497    5068 command_runner.go:130] >   Boot ID:                    93b81312-340b-4997-83aa-cdf61cfe3dbf
	I0507 19:55:44.591497    5068 command_runner.go:130] >   Kernel Version:             5.10.207
	I0507 19:55:44.591497    5068 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0507 19:55:44.591497    5068 command_runner.go:130] >   Operating System:           linux
	I0507 19:55:44.591497    5068 command_runner.go:130] >   Architecture:               amd64
	I0507 19:55:44.591497    5068 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0507 19:55:44.591497    5068 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0507 19:55:44.591497    5068 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0507 19:55:44.591497    5068 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0507 19:55:44.591497    5068 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0507 19:55:44.591497    5068 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0507 19:55:44.591497    5068 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0507 19:55:44.591497    5068 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0507 19:55:44.591497    5068 command_runner.go:130] >   default                     busybox-fc5497c4f-gcqlv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0507 19:55:44.591497    5068 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-5j966                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	I0507 19:55:44.591497    5068 command_runner.go:130] >   kube-system                 etcd-multinode-600000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         71s
	I0507 19:55:44.591497    5068 command_runner.go:130] >   kube-system                 kindnet-zw4r9                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0507 19:55:44.591497    5068 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-600000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         69s
	I0507 19:55:44.591497    5068 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-600000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	I0507 19:55:44.591497    5068 command_runner.go:130] >   kube-system                 kube-proxy-c9gw5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0507 19:55:44.591497    5068 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-600000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	I0507 19:55:44.591497    5068 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0507 19:55:44.591497    5068 command_runner.go:130] > Allocated resources:
	I0507 19:55:44.592029    5068 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0507 19:55:44.592029    5068 command_runner.go:130] >   Resource           Requests     Limits
	I0507 19:55:44.592029    5068 command_runner.go:130] >   --------           --------     ------
	I0507 19:55:44.592029    5068 command_runner.go:130] >   cpu                850m (42%!)(MISSING)   100m (5%!)(MISSING)
	I0507 19:55:44.592076    5068 command_runner.go:130] >   memory             220Mi (10%!)(MISSING)  220Mi (10%!)(MISSING)
	I0507 19:55:44.592192    5068 command_runner.go:130] >   ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	I0507 19:55:44.592192    5068 command_runner.go:130] >   hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	I0507 19:55:44.592192    5068 command_runner.go:130] > Events:
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0507 19:55:44.592192    5068 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  Starting                 68s                kube-proxy       
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node multinode-600000 status is now: NodeHasSufficientMemory
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node multinode-600000 status is now: NodeHasNoDiskPressure
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node multinode-600000 status is now: NodeHasSufficientPID
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m                kubelet          Node multinode-600000 status is now: NodeHasNoDiskPressure
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m                kubelet          Node multinode-600000 status is now: NodeHasSufficientMemory
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m                kubelet          Node multinode-600000 status is now: NodeHasSufficientPID
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  Starting                 22m                kubelet          Starting kubelet.
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-600000 event: Registered Node multinode-600000 in Controller
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-600000 status is now: NodeReady
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  Starting                 76s                kubelet          Starting kubelet.
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     76s (x7 over 76s)  kubelet          Node multinode-600000 status is now: NodeHasSufficientPID
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  76s                kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  75s (x8 over 76s)  kubelet          Node multinode-600000 status is now: NodeHasSufficientMemory
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    75s (x8 over 76s)  kubelet          Node multinode-600000 status is now: NodeHasNoDiskPressure
	I0507 19:55:44.592192    5068 command_runner.go:130] >   Normal  RegisteredNode           58s                node-controller  Node multinode-600000 event: Registered Node multinode-600000 in Controller
	I0507 19:55:44.592192    5068 command_runner.go:130] > Name:               multinode-600000-m02
	I0507 19:55:44.592192    5068 command_runner.go:130] > Roles:              <none>
	I0507 19:55:44.592192    5068 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0507 19:55:44.592192    5068 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0507 19:55:44.592192    5068 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0507 19:55:44.592192    5068 command_runner.go:130] >                     kubernetes.io/hostname=multinode-600000-m02
	I0507 19:55:44.592192    5068 command_runner.go:130] >                     kubernetes.io/os=linux
	I0507 19:55:44.592192    5068 command_runner.go:130] >                     minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	I0507 19:55:44.592192    5068 command_runner.go:130] >                     minikube.k8s.io/name=multinode-600000
	I0507 19:55:44.592192    5068 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0507 19:55:44.592192    5068 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_07T19_36_40_0700
	I0507 19:55:44.592192    5068 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0507 19:55:44.592192    5068 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0507 19:55:44.592192    5068 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0507 19:55:44.592722    5068 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0507 19:55:44.592767    5068 command_runner.go:130] > CreationTimestamp:  Tue, 07 May 2024 19:36:39 +0000
	I0507 19:55:44.592767    5068 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0507 19:55:44.592767    5068 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0507 19:55:44.592767    5068 command_runner.go:130] > Unschedulable:      false
	I0507 19:55:44.592767    5068 command_runner.go:130] > Lease:
	I0507 19:55:44.592767    5068 command_runner.go:130] >   HolderIdentity:  multinode-600000-m02
	I0507 19:55:44.592825    5068 command_runner.go:130] >   AcquireTime:     <unset>
	I0507 19:55:44.592825    5068 command_runner.go:130] >   RenewTime:       Tue, 07 May 2024 19:51:38 +0000
	I0507 19:55:44.592861    5068 command_runner.go:130] > Conditions:
	I0507 19:55:44.592893    5068 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0507 19:55:44.592908    5068 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0507 19:55:44.592932    5068 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 07 May 2024 19:47:54 +0000   Tue, 07 May 2024 19:55:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:44.592972    5068 command_runner.go:130] >   DiskPressure     Unknown   Tue, 07 May 2024 19:47:54 +0000   Tue, 07 May 2024 19:55:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:44.592972    5068 command_runner.go:130] >   PIDPressure      Unknown   Tue, 07 May 2024 19:47:54 +0000   Tue, 07 May 2024 19:55:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:44.593012    5068 command_runner.go:130] >   Ready            Unknown   Tue, 07 May 2024 19:47:54 +0000   Tue, 07 May 2024 19:55:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:44.593012    5068 command_runner.go:130] > Addresses:
	I0507 19:55:44.593012    5068 command_runner.go:130] >   InternalIP:  172.19.143.144
	I0507 19:55:44.593046    5068 command_runner.go:130] >   Hostname:    multinode-600000-m02
	I0507 19:55:44.593046    5068 command_runner.go:130] > Capacity:
	I0507 19:55:44.593046    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:44.593046    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:44.593086    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:44.593086    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:44.593086    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:44.593086    5068 command_runner.go:130] > Allocatable:
	I0507 19:55:44.593126    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:44.593126    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:44.593126    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:44.593158    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:44.593440    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:44.593512    5068 command_runner.go:130] > System Info:
	I0507 19:55:44.593552    5068 command_runner.go:130] >   Machine ID:                 34eb4e78cde1423b93517d0087c85f3c
	I0507 19:55:44.593591    5068 command_runner.go:130] >   System UUID:                7ed694c3-4cb4-954c-b244-d0ff36461420
	I0507 19:55:44.593624    5068 command_runner.go:130] >   Boot ID:                    6dd39eeb-a923-4a09-93c8-8c26dd122d68
	I0507 19:55:44.593676    5068 command_runner.go:130] >   Kernel Version:             5.10.207
	I0507 19:55:44.593676    5068 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0507 19:55:44.593712    5068 command_runner.go:130] >   Operating System:           linux
	I0507 19:55:44.593759    5068 command_runner.go:130] >   Architecture:               amd64
	I0507 19:55:44.593787    5068 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0507 19:55:44.593787    5068 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0507 19:55:44.593820    5068 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0507 19:55:44.593820    5068 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0507 19:55:44.593820    5068 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0507 19:55:44.593905    5068 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0507 19:55:44.593979    5068 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0507 19:55:44.594059    5068 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0507 19:55:44.594096    5068 command_runner.go:130] >   default                     busybox-fc5497c4f-cpw2r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0507 19:55:44.594123    5068 command_runner.go:130] >   kube-system                 kindnet-jmlw2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	I0507 19:55:44.594123    5068 command_runner.go:130] >   kube-system                 kube-proxy-9fb6t           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0507 19:55:44.594123    5068 command_runner.go:130] > Allocated resources:
	I0507 19:55:44.594123    5068 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0507 19:55:44.594123    5068 command_runner.go:130] >   Resource           Requests   Limits
	I0507 19:55:44.594123    5068 command_runner.go:130] >   --------           --------   ------
	I0507 19:55:44.594123    5068 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0507 19:55:44.594123    5068 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0507 19:55:44.594123    5068 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0507 19:55:44.594123    5068 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0507 19:55:44.594123    5068 command_runner.go:130] > Events:
	I0507 19:55:44.594123    5068 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0507 19:55:44.594123    5068 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0507 19:55:44.594123    5068 command_runner.go:130] >   Normal  Starting                 18m                kube-proxy       
	I0507 19:55:44.594123    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  19m (x2 over 19m)  kubelet          Node multinode-600000-m02 status is now: NodeHasSufficientMemory
	I0507 19:55:44.594123    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet          Node multinode-600000-m02 status is now: NodeHasNoDiskPressure
	I0507 19:55:44.594123    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node multinode-600000-m02 status is now: NodeHasSufficientPID
	I0507 19:55:44.594123    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:44.594123    5068 command_runner.go:130] >   Normal  RegisteredNode           19m                node-controller  Node multinode-600000-m02 event: Registered Node multinode-600000-m02 in Controller
	I0507 19:55:44.594123    5068 command_runner.go:130] >   Normal  NodeReady                18m                kubelet          Node multinode-600000-m02 status is now: NodeReady
	I0507 19:55:44.594123    5068 command_runner.go:130] >   Normal  RegisteredNode           58s                node-controller  Node multinode-600000-m02 event: Registered Node multinode-600000-m02 in Controller
	I0507 19:55:44.594123    5068 command_runner.go:130] >   Normal  NodeNotReady             18s                node-controller  Node multinode-600000-m02 status is now: NodeNotReady
	I0507 19:55:44.594123    5068 command_runner.go:130] > Name:               multinode-600000-m03
	I0507 19:55:44.594123    5068 command_runner.go:130] > Roles:              <none>
	I0507 19:55:44.594651    5068 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0507 19:55:44.594651    5068 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0507 19:55:44.594695    5068 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0507 19:55:44.594735    5068 command_runner.go:130] >                     kubernetes.io/hostname=multinode-600000-m03
	I0507 19:55:44.594735    5068 command_runner.go:130] >                     kubernetes.io/os=linux
	I0507 19:55:44.594775    5068 command_runner.go:130] >                     minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	I0507 19:55:44.594808    5068 command_runner.go:130] >                     minikube.k8s.io/name=multinode-600000
	I0507 19:55:44.594808    5068 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0507 19:55:44.594808    5068 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_07T19_50_26_0700
	I0507 19:55:44.594888    5068 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0507 19:55:44.594921    5068 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0507 19:55:44.594921    5068 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0507 19:55:44.594961    5068 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0507 19:55:44.594961    5068 command_runner.go:130] > CreationTimestamp:  Tue, 07 May 2024 19:50:25 +0000
	I0507 19:55:44.595001    5068 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0507 19:55:44.595034    5068 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0507 19:55:44.595074    5068 command_runner.go:130] > Unschedulable:      false
	I0507 19:55:44.595074    5068 command_runner.go:130] > Lease:
	I0507 19:55:44.595115    5068 command_runner.go:130] >   HolderIdentity:  multinode-600000-m03
	I0507 19:55:44.595115    5068 command_runner.go:130] >   AcquireTime:     <unset>
	I0507 19:55:44.595147    5068 command_runner.go:130] >   RenewTime:       Tue, 07 May 2024 19:51:16 +0000
	I0507 19:55:44.595147    5068 command_runner.go:130] > Conditions:
	I0507 19:55:44.595188    5068 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0507 19:55:44.595227    5068 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0507 19:55:44.595269    5068 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 07 May 2024 19:50:31 +0000   Tue, 07 May 2024 19:51:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:44.595269    5068 command_runner.go:130] >   DiskPressure     Unknown   Tue, 07 May 2024 19:50:31 +0000   Tue, 07 May 2024 19:51:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:44.595269    5068 command_runner.go:130] >   PIDPressure      Unknown   Tue, 07 May 2024 19:50:31 +0000   Tue, 07 May 2024 19:51:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:44.595269    5068 command_runner.go:130] >   Ready            Unknown   Tue, 07 May 2024 19:50:31 +0000   Tue, 07 May 2024 19:51:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:44.595269    5068 command_runner.go:130] > Addresses:
	I0507 19:55:44.595269    5068 command_runner.go:130] >   InternalIP:  172.19.129.4
	I0507 19:55:44.595269    5068 command_runner.go:130] >   Hostname:    multinode-600000-m03
	I0507 19:55:44.595269    5068 command_runner.go:130] > Capacity:
	I0507 19:55:44.595269    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:44.595269    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:44.595269    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:44.595269    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:44.595269    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:44.595269    5068 command_runner.go:130] > Allocatable:
	I0507 19:55:44.595269    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:44.595269    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:44.595269    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:44.595269    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:44.595269    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:44.595269    5068 command_runner.go:130] > System Info:
	I0507 19:55:44.595269    5068 command_runner.go:130] >   Machine ID:                 380df77fae65410dba19d02344fea647
	I0507 19:55:44.595269    5068 command_runner.go:130] >   System UUID:                ed9d4a55-0088-004e-addb-543af9e02720
	I0507 19:55:44.595269    5068 command_runner.go:130] >   Boot ID:                    e0ec4add-64d0-47e3-9547-3261cfbddd3a
	I0507 19:55:44.595269    5068 command_runner.go:130] >   Kernel Version:             5.10.207
	I0507 19:55:44.595269    5068 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0507 19:55:44.595269    5068 command_runner.go:130] >   Operating System:           linux
	I0507 19:55:44.595269    5068 command_runner.go:130] >   Architecture:               amd64
	I0507 19:55:44.595269    5068 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0507 19:55:44.595269    5068 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0507 19:55:44.595269    5068 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0507 19:55:44.595269    5068 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0507 19:55:44.595269    5068 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0507 19:55:44.595269    5068 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0507 19:55:44.595798    5068 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0507 19:55:44.595884    5068 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0507 19:55:44.595920    5068 command_runner.go:130] >   kube-system                 kindnet-dkxzt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	I0507 19:55:44.595968    5068 command_runner.go:130] >   kube-system                 kube-proxy-pzn8q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	I0507 19:55:44.595968    5068 command_runner.go:130] > Allocated resources:
	I0507 19:55:44.596043    5068 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0507 19:55:44.596074    5068 command_runner.go:130] >   Resource           Requests   Limits
	I0507 19:55:44.596074    5068 command_runner.go:130] >   --------           --------   ------
	I0507 19:55:44.596121    5068 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0507 19:55:44.596212    5068 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0507 19:55:44.596212    5068 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0507 19:55:44.596256    5068 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0507 19:55:44.596256    5068 command_runner.go:130] > Events:
	I0507 19:55:44.596348    5068 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0507 19:55:44.596348    5068 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0507 19:55:44.596390    5068 command_runner.go:130] >   Normal  Starting                 5m15s                  kube-proxy       
	I0507 19:55:44.596390    5068 command_runner.go:130] >   Normal  Starting                 14m                    kube-proxy       
	I0507 19:55:44.596435    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:44.596483    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  14m (x2 over 14m)      kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientMemory
	I0507 19:55:44.596527    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    14m (x2 over 14m)      kubelet          Node multinode-600000-m03 status is now: NodeHasNoDiskPressure
	I0507 19:55:44.596569    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     14m (x2 over 14m)      kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientPID
	I0507 19:55:44.596613    5068 command_runner.go:130] >   Normal  NodeReady                14m                    kubelet          Node multinode-600000-m03 status is now: NodeReady
	I0507 19:55:44.596661    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m19s (x2 over 5m19s)  kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientMemory
	I0507 19:55:44.596704    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m19s (x2 over 5m19s)  kubelet          Node multinode-600000-m03 status is now: NodeHasNoDiskPressure
	I0507 19:55:44.596745    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m19s (x2 over 5m19s)  kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientPID
	I0507 19:55:44.596791    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m19s                  kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:44.596837    5068 command_runner.go:130] >   Normal  RegisteredNode           5m16s                  node-controller  Node multinode-600000-m03 event: Registered Node multinode-600000-m03 in Controller
	I0507 19:55:44.596882    5068 command_runner.go:130] >   Normal  NodeReady                5m13s                  kubelet          Node multinode-600000-m03 status is now: NodeReady
	I0507 19:55:44.596882    5068 command_runner.go:130] >   Normal  NodeNotReady             3m46s                  node-controller  Node multinode-600000-m03 status is now: NodeNotReady
	I0507 19:55:44.596929    5068 command_runner.go:130] >   Normal  RegisteredNode           58s                    node-controller  Node multinode-600000-m03 event: Registered Node multinode-600000-m03 in Controller
	I0507 19:55:44.608861    5068 logs.go:123] Gathering logs for kube-apiserver [7c95e3addc4b] ...
	I0507 19:55:44.608861    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c95e3addc4b"
	I0507 19:55:44.635068    5068 command_runner.go:130] ! I0507 19:54:30.988770       1 options.go:221] external host was not specified, using 172.19.135.22
	I0507 19:55:44.635475    5068 command_runner.go:130] ! I0507 19:54:30.995893       1 server.go:148] Version: v1.30.0
	I0507 19:55:44.635517    5068 command_runner.go:130] ! I0507 19:54:30.996132       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:44.635964    5068 command_runner.go:130] ! I0507 19:54:31.800337       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0507 19:55:44.636082    5068 command_runner.go:130] ! I0507 19:54:31.800374       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0507 19:55:44.636120    5068 command_runner.go:130] ! I0507 19:54:31.801064       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0507 19:55:44.636120    5068 command_runner.go:130] ! I0507 19:54:31.801131       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0507 19:55:44.636120    5068 command_runner.go:130] ! I0507 19:54:31.801553       1 instance.go:299] Using reconciler: lease
	I0507 19:55:44.636120    5068 command_runner.go:130] ! I0507 19:54:32.352039       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0507 19:55:44.636227    5068 command_runner.go:130] ! W0507 19:54:32.352075       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:44.636271    5068 command_runner.go:130] ! I0507 19:54:32.609708       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0507 19:55:44.636271    5068 command_runner.go:130] ! I0507 19:54:32.610006       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0507 19:55:44.636344    5068 command_runner.go:130] ! I0507 19:54:32.836522       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0507 19:55:44.636344    5068 command_runner.go:130] ! I0507 19:54:32.999148       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0507 19:55:44.636397    5068 command_runner.go:130] ! I0507 19:54:33.030018       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0507 19:55:44.636436    5068 command_runner.go:130] ! W0507 19:54:33.030136       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:44.636436    5068 command_runner.go:130] ! W0507 19:54:33.030146       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:44.636436    5068 command_runner.go:130] ! I0507 19:54:33.030562       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0507 19:55:44.636436    5068 command_runner.go:130] ! W0507 19:54:33.030671       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:44.636436    5068 command_runner.go:130] ! I0507 19:54:33.031835       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0507 19:55:44.636436    5068 command_runner.go:130] ! I0507 19:54:33.032596       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0507 19:55:44.636551    5068 command_runner.go:130] ! W0507 19:54:33.032785       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0507 19:55:44.636551    5068 command_runner.go:130] ! W0507 19:54:33.032807       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0507 19:55:44.636551    5068 command_runner.go:130] ! I0507 19:54:33.034337       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0507 19:55:44.636551    5068 command_runner.go:130] ! W0507 19:54:33.034455       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0507 19:55:44.636551    5068 command_runner.go:130] ! I0507 19:54:33.035255       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0507 19:55:44.636551    5068 command_runner.go:130] ! W0507 19:54:33.035288       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:44.636551    5068 command_runner.go:130] ! W0507 19:54:33.035294       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:44.636551    5068 command_runner.go:130] ! I0507 19:54:33.035838       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0507 19:55:44.636551    5068 command_runner.go:130] ! W0507 19:54:33.035918       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:44.636551    5068 command_runner.go:130] ! W0507 19:54:33.035968       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:44.636551    5068 command_runner.go:130] ! I0507 19:54:33.036453       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0507 19:55:44.636551    5068 command_runner.go:130] ! I0507 19:54:33.038094       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0507 19:55:44.636551    5068 command_runner.go:130] ! W0507 19:54:33.038196       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:44.636551    5068 command_runner.go:130] ! W0507 19:54:33.038204       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:44.636551    5068 command_runner.go:130] ! I0507 19:54:33.038675       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0507 19:55:44.636551    5068 command_runner.go:130] ! W0507 19:54:33.038880       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:44.636551    5068 command_runner.go:130] ! W0507 19:54:33.038891       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:44.636551    5068 command_runner.go:130] ! I0507 19:54:33.039628       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0507 19:55:44.636551    5068 command_runner.go:130] ! W0507 19:54:33.039798       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0507 19:55:44.636551    5068 command_runner.go:130] ! I0507 19:54:33.041524       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0507 19:55:44.636551    5068 command_runner.go:130] ! W0507 19:54:33.041621       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:44.636551    5068 command_runner.go:130] ! W0507 19:54:33.041630       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:44.636551    5068 command_runner.go:130] ! I0507 19:54:33.042180       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0507 19:55:44.636551    5068 command_runner.go:130] ! W0507 19:54:33.042199       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:44.637090    5068 command_runner.go:130] ! W0507 19:54:33.042204       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:44.637135    5068 command_runner.go:130] ! I0507 19:54:33.044893       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0507 19:55:44.637135    5068 command_runner.go:130] ! W0507 19:54:33.045016       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:44.637135    5068 command_runner.go:130] ! W0507 19:54:33.045025       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:44.637202    5068 command_runner.go:130] ! I0507 19:54:33.046333       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0507 19:55:44.637242    5068 command_runner.go:130] ! I0507 19:54:33.047629       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0507 19:55:44.637274    5068 command_runner.go:130] ! W0507 19:54:33.047767       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0507 19:55:44.637274    5068 command_runner.go:130] ! W0507 19:54:33.047776       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:44.637274    5068 command_runner.go:130] ! I0507 19:54:33.052196       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0507 19:55:44.637274    5068 command_runner.go:130] ! W0507 19:54:33.052296       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0507 19:55:44.637274    5068 command_runner.go:130] ! W0507 19:54:33.052305       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0507 19:55:44.637274    5068 command_runner.go:130] ! I0507 19:54:33.054428       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0507 19:55:44.637274    5068 command_runner.go:130] ! W0507 19:54:33.054530       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:44.637274    5068 command_runner.go:130] ! W0507 19:54:33.054538       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:44.637274    5068 command_runner.go:130] ! I0507 19:54:33.055154       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0507 19:55:44.637274    5068 command_runner.go:130] ! W0507 19:54:33.055244       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:44.637274    5068 command_runner.go:130] ! I0507 19:54:33.069859       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0507 19:55:44.637274    5068 command_runner.go:130] ! W0507 19:54:33.070043       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:44.637274    5068 command_runner.go:130] ! I0507 19:54:33.594507       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0507 19:55:44.637274    5068 command_runner.go:130] ! I0507 19:54:33.594682       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0507 19:55:44.637274    5068 command_runner.go:130] ! I0507 19:54:33.595540       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0507 19:55:44.637274    5068 command_runner.go:130] ! I0507 19:54:33.595924       1 secure_serving.go:213] Serving securely on [::]:8443
	I0507 19:55:44.637274    5068 command_runner.go:130] ! I0507 19:54:33.596143       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 19:55:44.637274    5068 command_runner.go:130] ! I0507 19:54:33.596346       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0507 19:55:44.637274    5068 command_runner.go:130] ! I0507 19:54:33.596374       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0507 19:55:44.637274    5068 command_runner.go:130] ! I0507 19:54:33.598256       1 available_controller.go:423] Starting AvailableConditionController
	I0507 19:55:44.637274    5068 command_runner.go:130] ! I0507 19:54:33.598413       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0507 19:55:44.637274    5068 command_runner.go:130] ! I0507 19:54:33.598667       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0507 19:55:44.637274    5068 command_runner.go:130] ! I0507 19:54:33.598950       1 controller.go:116] Starting legacy_token_tracking_controller
	I0507 19:55:44.637274    5068 command_runner.go:130] ! I0507 19:54:33.599041       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0507 19:55:44.637808    5068 command_runner.go:130] ! I0507 19:54:33.599147       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0507 19:55:44.637808    5068 command_runner.go:130] ! I0507 19:54:33.599437       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0507 19:55:44.637857    5068 command_runner.go:130] ! I0507 19:54:33.600282       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0507 19:55:44.637857    5068 command_runner.go:130] ! I0507 19:54:33.600293       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0507 19:55:44.637913    5068 command_runner.go:130] ! I0507 19:54:33.600310       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0507 19:55:44.637913    5068 command_runner.go:130] ! I0507 19:54:33.600988       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0507 19:55:44.637962    5068 command_runner.go:130] ! I0507 19:54:33.601389       1 aggregator.go:163] waiting for initial CRD sync...
	I0507 19:55:44.637962    5068 command_runner.go:130] ! I0507 19:54:33.601406       1 controller.go:78] Starting OpenAPI AggregationController
	I0507 19:55:44.638036    5068 command_runner.go:130] ! I0507 19:54:33.601452       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0507 19:55:44.638036    5068 command_runner.go:130] ! I0507 19:54:33.601517       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0507 19:55:44.638120    5068 command_runner.go:130] ! I0507 19:54:33.603473       1 controller.go:139] Starting OpenAPI controller
	I0507 19:55:44.638120    5068 command_runner.go:130] ! I0507 19:54:33.603607       1 controller.go:87] Starting OpenAPI V3 controller
	I0507 19:55:44.638120    5068 command_runner.go:130] ! I0507 19:54:33.603676       1 naming_controller.go:291] Starting NamingConditionController
	I0507 19:55:44.638120    5068 command_runner.go:130] ! I0507 19:54:33.603772       1 establishing_controller.go:76] Starting EstablishingController
	I0507 19:55:44.638202    5068 command_runner.go:130] ! I0507 19:54:33.603950       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0507 19:55:44.638202    5068 command_runner.go:130] ! I0507 19:54:33.606447       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0507 19:55:44.638202    5068 command_runner.go:130] ! I0507 19:54:33.606495       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0507 19:55:44.638202    5068 command_runner.go:130] ! I0507 19:54:33.617581       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0507 19:55:44.638202    5068 command_runner.go:130] ! I0507 19:54:33.640887       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0507 19:55:44.638202    5068 command_runner.go:130] ! I0507 19:54:33.641139       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0507 19:55:44.638279    5068 command_runner.go:130] ! I0507 19:54:33.700222       1 shared_informer.go:320] Caches are synced for configmaps
	I0507 19:55:44.638319    5068 command_runner.go:130] ! I0507 19:54:33.702782       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0507 19:55:44.638359    5068 command_runner.go:130] ! I0507 19:54:33.702797       1 policy_source.go:224] refreshing policies
	I0507 19:55:44.638359    5068 command_runner.go:130] ! I0507 19:54:33.720688       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0507 19:55:44.638359    5068 command_runner.go:130] ! I0507 19:54:33.721334       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0507 19:55:44.638359    5068 command_runner.go:130] ! I0507 19:54:33.739066       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0507 19:55:44.638408    5068 command_runner.go:130] ! I0507 19:54:33.741686       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0507 19:55:44.638408    5068 command_runner.go:130] ! I0507 19:54:33.742272       1 aggregator.go:165] initial CRD sync complete...
	I0507 19:55:44.638445    5068 command_runner.go:130] ! I0507 19:54:33.742439       1 autoregister_controller.go:141] Starting autoregister controller
	I0507 19:55:44.638445    5068 command_runner.go:130] ! I0507 19:54:33.742581       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0507 19:55:44.638474    5068 command_runner.go:130] ! I0507 19:54:33.742709       1 cache.go:39] Caches are synced for autoregister controller
	I0507 19:55:44.638474    5068 command_runner.go:130] ! I0507 19:54:33.796399       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0507 19:55:44.638474    5068 command_runner.go:130] ! I0507 19:54:33.800122       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0507 19:55:44.638525    5068 command_runner.go:130] ! I0507 19:54:33.800332       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0507 19:55:44.638525    5068 command_runner.go:130] ! I0507 19:54:33.800503       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0507 19:55:44.638564    5068 command_runner.go:130] ! I0507 19:54:33.825705       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0507 19:55:44.638564    5068 command_runner.go:130] ! I0507 19:54:34.607945       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0507 19:55:44.638564    5068 command_runner.go:130] ! W0507 19:54:35.478370       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.135.22]
	I0507 19:55:44.638564    5068 command_runner.go:130] ! I0507 19:54:35.480604       1 controller.go:615] quota admission added evaluator for: endpoints
	I0507 19:55:44.638564    5068 command_runner.go:130] ! I0507 19:54:35.493313       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0507 19:55:44.638564    5068 command_runner.go:130] ! I0507 19:54:36.265995       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0507 19:55:44.638564    5068 command_runner.go:130] ! I0507 19:54:36.444774       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0507 19:55:44.638564    5068 command_runner.go:130] ! I0507 19:54:36.460585       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0507 19:55:44.638564    5068 command_runner.go:130] ! I0507 19:54:36.562263       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0507 19:55:44.638564    5068 command_runner.go:130] ! I0507 19:54:36.572917       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0507 19:55:44.645549    5068 logs.go:123] Gathering logs for kube-scheduler [7cefdac2050f] ...
	I0507 19:55:44.645549    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cefdac2050f"
	I0507 19:55:44.669932    5068 command_runner.go:130] ! I0507 19:33:39.572817       1 serving.go:380] Generated self-signed cert in-memory
	I0507 19:55:44.670029    5068 command_runner.go:130] ! W0507 19:33:41.035488       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0507 19:55:44.670072    5068 command_runner.go:130] ! W0507 19:33:41.035523       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:44.670072    5068 command_runner.go:130] ! W0507 19:33:41.035535       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0507 19:55:44.670122    5068 command_runner.go:130] ! W0507 19:33:41.035542       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0507 19:55:44.670165    5068 command_runner.go:130] ! I0507 19:33:41.100225       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0507 19:55:44.670165    5068 command_runner.go:130] ! I0507 19:33:41.104133       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:44.670165    5068 command_runner.go:130] ! I0507 19:33:41.108249       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0507 19:55:44.670213    5068 command_runner.go:130] ! I0507 19:33:41.108399       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 19:55:44.670213    5068 command_runner.go:130] ! I0507 19:33:41.108383       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0507 19:55:44.670256    5068 command_runner.go:130] ! I0507 19:33:41.108658       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0507 19:55:44.670256    5068 command_runner.go:130] ! W0507 19:33:41.115439       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0507 19:55:44.670329    5068 command_runner.go:130] ! E0507 19:33:41.115515       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0507 19:55:44.670375    5068 command_runner.go:130] ! W0507 19:33:41.115737       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0507 19:55:44.670412    5068 command_runner.go:130] ! E0507 19:33:41.115969       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0507 19:55:44.670499    5068 command_runner.go:130] ! W0507 19:33:41.115744       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0507 19:55:44.670552    5068 command_runner.go:130] ! E0507 19:33:41.116415       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0507 19:55:44.670552    5068 command_runner.go:130] ! W0507 19:33:41.116670       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:44.670650    5068 command_runner.go:130] ! E0507 19:33:41.117593       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:44.670702    5068 command_runner.go:130] ! W0507 19:33:41.119709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0507 19:55:44.670749    5068 command_runner.go:130] ! E0507 19:33:41.120474       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0507 19:55:44.670800    5068 command_runner.go:130] ! W0507 19:33:41.119953       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0507 19:55:44.670845    5068 command_runner.go:130] ! E0507 19:33:41.121523       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0507 19:55:44.670898    5068 command_runner.go:130] ! W0507 19:33:41.120191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:44.670944    5068 command_runner.go:130] ! W0507 19:33:41.120237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:44.670995    5068 command_runner.go:130] ! W0507 19:33:41.120278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0507 19:55:44.671042    5068 command_runner.go:130] ! W0507 19:33:41.120316       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:44.671093    5068 command_runner.go:130] ! W0507 19:33:41.120339       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0507 19:55:44.671139    5068 command_runner.go:130] ! W0507 19:33:41.120384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0507 19:55:44.671191    5068 command_runner.go:130] ! W0507 19:33:41.120417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0507 19:55:44.671244    5068 command_runner.go:130] ! W0507 19:33:41.120451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0507 19:55:44.671244    5068 command_runner.go:130] ! E0507 19:33:41.122419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:44.671295    5068 command_runner.go:130] ! W0507 19:33:41.123409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:44.671393    5068 command_runner.go:130] ! E0507 19:33:41.123928       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:44.671439    5068 command_runner.go:130] ! E0507 19:33:41.123939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:44.671490    5068 command_runner.go:130] ! E0507 19:33:41.123946       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0507 19:55:44.671587    5068 command_runner.go:130] ! E0507 19:33:41.123954       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0507 19:55:44.671633    5068 command_runner.go:130] ! E0507 19:33:41.123963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0507 19:55:44.671633    5068 command_runner.go:130] ! E0507 19:33:41.124140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0507 19:55:44.671730    5068 command_runner.go:130] ! E0507 19:33:41.125875       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0507 19:55:44.677511    5068 command_runner.go:130] ! E0507 19:33:41.125886       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:44.678184    5068 command_runner.go:130] ! W0507 19:33:41.948129       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0507 19:55:44.678184    5068 command_runner.go:130] ! E0507 19:33:41.948157       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0507 19:55:44.678184    5068 command_runner.go:130] ! W0507 19:33:41.994257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:44.678184    5068 command_runner.go:130] ! E0507 19:33:41.994824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:44.678184    5068 command_runner.go:130] ! W0507 19:33:42.109252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:44.678184    5068 command_runner.go:130] ! E0507 19:33:42.109623       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:44.678184    5068 command_runner.go:130] ! W0507 19:33:42.156561       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0507 19:55:44.678184    5068 command_runner.go:130] ! E0507 19:33:42.157128       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0507 19:55:44.678184    5068 command_runner.go:130] ! W0507 19:33:42.162271       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:44.678714    5068 command_runner.go:130] ! E0507 19:33:42.162599       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:44.678875    5068 command_runner.go:130] ! W0507 19:33:42.229371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0507 19:55:44.678954    5068 command_runner.go:130] ! E0507 19:33:42.229525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0507 19:55:44.678954    5068 command_runner.go:130] ! W0507 19:33:42.264429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0507 19:55:44.678954    5068 command_runner.go:130] ! E0507 19:33:42.264596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0507 19:55:44.678954    5068 command_runner.go:130] ! W0507 19:33:42.284763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0507 19:55:44.678954    5068 command_runner.go:130] ! E0507 19:33:42.284872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0507 19:55:44.678954    5068 command_runner.go:130] ! W0507 19:33:42.338396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0507 19:55:44.678954    5068 command_runner.go:130] ! E0507 19:33:42.338683       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0507 19:55:44.678954    5068 command_runner.go:130] ! W0507 19:33:42.356861       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0507 19:55:44.678954    5068 command_runner.go:130] ! E0507 19:33:42.356964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0507 19:55:44.678954    5068 command_runner.go:130] ! W0507 19:33:42.435844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0507 19:55:44.679480    5068 command_runner.go:130] ! E0507 19:33:42.436739       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0507 19:55:44.679480    5068 command_runner.go:130] ! W0507 19:33:42.446379       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:44.679593    5068 command_runner.go:130] ! E0507 19:33:42.446557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:44.679665    5068 command_runner.go:130] ! W0507 19:33:42.489593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:44.679739    5068 command_runner.go:130] ! E0507 19:33:42.489896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:44.679816    5068 command_runner.go:130] ! W0507 19:33:42.647287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0507 19:55:44.679896    5068 command_runner.go:130] ! E0507 19:33:42.648065       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0507 19:55:44.679969    5068 command_runner.go:130] ! W0507 19:33:42.657928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0507 19:55:44.680021    5068 command_runner.go:130] ! E0507 19:33:42.658018       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0507 19:55:44.680086    5068 command_runner.go:130] ! I0507 19:33:43.909008       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0507 19:55:44.680159    5068 command_runner.go:130] ! E0507 19:52:16.714078       1 run.go:74] "command failed" err="finished without leader elect"
	I0507 19:55:44.693489    5068 logs.go:123] Gathering logs for kube-proxy [5255a972ff6c] ...
	I0507 19:55:44.693489    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5255a972ff6c"
	I0507 19:55:44.726512    5068 command_runner.go:130] ! I0507 19:54:35.575583       1 server_linux.go:69] "Using iptables proxy"
	I0507 19:55:44.727037    5068 command_runner.go:130] ! I0507 19:54:35.605564       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.135.22"]
	I0507 19:55:44.727037    5068 command_runner.go:130] ! I0507 19:54:35.819515       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0507 19:55:44.727037    5068 command_runner.go:130] ! I0507 19:54:35.819549       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0507 19:55:44.727037    5068 command_runner.go:130] ! I0507 19:54:35.819565       1 server_linux.go:165] "Using iptables Proxier"
	I0507 19:55:44.727037    5068 command_runner.go:130] ! I0507 19:54:35.837879       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0507 19:55:44.727037    5068 command_runner.go:130] ! I0507 19:54:35.838133       1 server.go:872] "Version info" version="v1.30.0"
	I0507 19:55:44.727037    5068 command_runner.go:130] ! I0507 19:54:35.838147       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:44.727037    5068 command_runner.go:130] ! I0507 19:54:35.845888       1 config.go:192] "Starting service config controller"
	I0507 19:55:44.727037    5068 command_runner.go:130] ! I0507 19:54:35.848183       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0507 19:55:44.727037    5068 command_runner.go:130] ! I0507 19:54:35.848226       1 config.go:319] "Starting node config controller"
	I0507 19:55:44.727037    5068 command_runner.go:130] ! I0507 19:54:35.848406       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0507 19:55:44.727037    5068 command_runner.go:130] ! I0507 19:54:35.849079       1 config.go:101] "Starting endpoint slice config controller"
	I0507 19:55:44.727244    5068 command_runner.go:130] ! I0507 19:54:35.849088       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0507 19:55:44.727365    5068 command_runner.go:130] ! I0507 19:54:35.954590       1 shared_informer.go:320] Caches are synced for node config
	I0507 19:55:44.727365    5068 command_runner.go:130] ! I0507 19:54:35.954640       1 shared_informer.go:320] Caches are synced for service config
	I0507 19:55:44.727365    5068 command_runner.go:130] ! I0507 19:54:35.954677       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0507 19:55:44.732052    5068 logs.go:123] Gathering logs for kube-proxy [aa9692c1fbd3] ...
	I0507 19:55:44.732052    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9692c1fbd3"
	I0507 19:55:44.754878    5068 command_runner.go:130] ! I0507 19:33:59.788332       1 server_linux.go:69] "Using iptables proxy"
	I0507 19:55:44.754878    5068 command_runner.go:130] ! I0507 19:33:59.819474       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.143.74"]
	I0507 19:55:44.754878    5068 command_runner.go:130] ! I0507 19:33:59.872130       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0507 19:55:44.755630    5068 command_runner.go:130] ! I0507 19:33:59.872292       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0507 19:55:44.755630    5068 command_runner.go:130] ! I0507 19:33:59.872320       1 server_linux.go:165] "Using iptables Proxier"
	I0507 19:55:44.755630    5068 command_runner.go:130] ! I0507 19:33:59.878610       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0507 19:55:44.755630    5068 command_runner.go:130] ! I0507 19:33:59.879634       1 server.go:872] "Version info" version="v1.30.0"
	I0507 19:55:44.755630    5068 command_runner.go:130] ! I0507 19:33:59.879774       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:44.755630    5068 command_runner.go:130] ! I0507 19:33:59.883100       1 config.go:192] "Starting service config controller"
	I0507 19:55:44.755630    5068 command_runner.go:130] ! I0507 19:33:59.884238       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0507 19:55:44.755630    5068 command_runner.go:130] ! I0507 19:33:59.884310       1 config.go:101] "Starting endpoint slice config controller"
	I0507 19:55:44.755891    5068 command_runner.go:130] ! I0507 19:33:59.884544       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0507 19:55:44.755891    5068 command_runner.go:130] ! I0507 19:33:59.886801       1 config.go:319] "Starting node config controller"
	I0507 19:55:44.755891    5068 command_runner.go:130] ! I0507 19:33:59.888528       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0507 19:55:44.755891    5068 command_runner.go:130] ! I0507 19:33:59.985346       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0507 19:55:44.755949    5068 command_runner.go:130] ! I0507 19:33:59.985458       1 shared_informer.go:320] Caches are synced for service config
	I0507 19:55:44.755949    5068 command_runner.go:130] ! I0507 19:33:59.988897       1 shared_informer.go:320] Caches are synced for node config
	I0507 19:55:44.757915    5068 logs.go:123] Gathering logs for kube-controller-manager [3067f16e2e38] ...
	I0507 19:55:44.758035    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3067f16e2e38"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:39.646652       1 serving.go:380] Generated self-signed cert in-memory
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:40.017908       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:40.018051       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:40.019973       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:40.020228       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:40.023071       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:40.024192       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.035484       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.035669       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.062270       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.062488       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.062501       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.082052       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.082328       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.082342       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.097853       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.100760       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.101645       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.135768       1 shared_informer.go:320] Caches are synced for tokens
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.143316       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.143654       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.143854       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.156569       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.156806       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.156821       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.193774       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.194041       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.224957       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.225326       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.225340       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.264579       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.265097       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0507 19:55:44.788053    5068 command_runner.go:130] ! I0507 19:33:44.265116       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.287038       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.287393       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.287436       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.356902       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.357443       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.357459       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0507 19:55:44.789051    5068 command_runner.go:130] ! E0507 19:33:44.380020       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.380113       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.504313       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.504889       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.504939       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.642194       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.642248       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.642259       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.952758       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.952894       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.952916       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.952951       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.952971       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.953093       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.953113       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.953131       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.953150       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.953173       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.953207       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.953385       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.953527       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.953695       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.953874       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.954040       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.954064       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.954206       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.954278       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.954308       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.954374       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.954592       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.954813       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.954968       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:44.959507       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.092915       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.092938       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.092974       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.093078       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.093089       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.248481       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.248590       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.248600       1 shared_informer.go:313] Waiting for caches to sync for job
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.403516       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.403864       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.404124       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.547079       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.547101       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.547218       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.547228       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.695293       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.695376       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.695385       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.842519       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0507 19:55:44.789051    5068 command_runner.go:130] ! I0507 19:33:45.843201       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:45.843464       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:45.843612       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:45.843670       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:45.994121       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:45.994195       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:45.994559       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.142670       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.142767       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.142777       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.292842       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.292937       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.292979       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.293532       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.443522       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.443783       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.443796       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.639478       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.639695       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.640237       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.640384       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.802195       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.802321       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.802333       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.839302       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.839419       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.839439       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.839547       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.995880       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.996105       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.996124       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.996192       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.996213       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.996264       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.996515       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.997757       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.997789       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.998232       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.998256       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.998461       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:46.998581       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:47.144659       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:47.144787       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:47.144840       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:47.188132       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:47.188178       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:47.188191       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:47.238083       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:47.238123       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:47.394585       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:47.394777       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:47.394803       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:47.394838       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:57.452785       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:57.452897       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:57.453626       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:57.453826       1 shared_informer.go:313] Waiting for caches to sync for node
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:57.483145       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:57.483422       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:57.493863       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:57.494296       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:57.494585       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0507 19:55:44.790046    5068 command_runner.go:130] ! I0507 19:33:57.506181       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.506211       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.506219       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.506448       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.506471       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0507 19:55:44.791048    5068 command_runner.go:130] ! E0507 19:33:57.508667       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.508863       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.536071       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.536238       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.536958       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.552316       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.552368       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.552583       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.552830       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.602799       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.604255       1 shared_informer.go:320] Caches are synced for expand
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.604567       1 shared_informer.go:320] Caches are synced for cronjob
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.604710       1 shared_informer.go:320] Caches are synced for PV protection
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.616713       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000\" does not exist"
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.620217       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.625534       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.637418       1 shared_informer.go:320] Caches are synced for namespace
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.640979       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.643690       1 shared_informer.go:320] Caches are synced for ephemeral
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.643962       1 shared_informer.go:320] Caches are synced for crt configmap
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.643944       1 shared_informer.go:320] Caches are synced for endpoint
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.645645       1 shared_informer.go:320] Caches are synced for PVC protection
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.650051       1 shared_informer.go:320] Caches are synced for job
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.654615       1 shared_informer.go:320] Caches are synced for node
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.654828       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.654976       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.658548       1 shared_informer.go:320] Caches are synced for stateful set
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.658557       1 shared_informer.go:320] Caches are synced for TTL
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.658578       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.660814       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.662570       1 shared_informer.go:320] Caches are synced for GC
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.666627       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.682592       1 shared_informer.go:320] Caches are synced for service account
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.683797       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.686866       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000" podCIDRs=["10.244.0.0/24"]
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.688271       1 shared_informer.go:320] Caches are synced for persistent volume
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.688450       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.693833       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.695065       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.696405       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.696588       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.699644       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.700059       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.700324       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.703629       1 shared_informer.go:320] Caches are synced for daemon sets
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.710906       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.744541       1 shared_informer.go:320] Caches are synced for HPA
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.744580       1 shared_informer.go:320] Caches are synced for taint
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.744652       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.744737       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000"
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.744768       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.764904       1 shared_informer.go:320] Caches are synced for resource quota
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.793156       1 shared_informer.go:320] Caches are synced for deployment
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.806522       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.841338       1 shared_informer.go:320] Caches are synced for disruption
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.848178       1 shared_informer.go:320] Caches are synced for attach detach
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:57.857076       1 shared_informer.go:320] Caches are synced for resource quota
	I0507 19:55:44.791048    5068 command_runner.go:130] ! I0507 19:33:58.320735       1 shared_informer.go:320] Caches are synced for garbage collector
	I0507 19:55:44.792383    5068 command_runner.go:130] ! I0507 19:33:58.353360       1 shared_informer.go:320] Caches are synced for garbage collector
	I0507 19:55:44.792383    5068 command_runner.go:130] ! I0507 19:33:58.353634       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0507 19:55:44.792466    5068 command_runner.go:130] ! I0507 19:33:58.648491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="254.239192ms"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:33:58.768889       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="120.227252ms"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:33:58.768980       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.703µs"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:33:59.385629       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.4593ms"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:33:59.400563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.850657ms"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:33:59.442803       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.020809ms"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:33:59.442937       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.204µs"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:34:10.730717       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="75.405µs"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:34:10.778543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="100.807µs"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:34:12.746728       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:34:12.843910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.905µs"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:34:12.916087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.128233ms"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:34:12.920189       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="131.008µs"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:36:39.748714       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m02\" does not exist"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:36:39.768095       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000-m02" podCIDRs=["10.244.1.0/24"]
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:36:42.771386       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000-m02"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:36:59.833069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:37:23.261574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.822997ms"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:37:23.275925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.242181ms"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:37:23.277411       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.303µs"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:37:25.468822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.984518ms"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:37:25.471412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.381856ms"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:37:26.028543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.755438ms"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:37:26.029180       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.706µs"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:40:53.034791       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:40:53.035911       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m03\" does not exist"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:40:53.048242       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000-m03" podCIDRs=["10.244.2.0/24"]
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:40:57.837925       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000-m03"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:41:13.622605       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:48:02.948548       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:44.792526    5068 command_runner.go:130] ! I0507 19:50:20.695158       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:44.793046    5068 command_runner.go:130] ! I0507 19:50:25.866050       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m03\" does not exist"
	I0507 19:55:44.793150    5068 command_runner.go:130] ! I0507 19:50:25.866126       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:44.793265    5068 command_runner.go:130] ! I0507 19:50:25.887459       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000-m03" podCIDRs=["10.244.3.0/24"]
	I0507 19:55:44.793265    5068 command_runner.go:130] ! I0507 19:50:31.631900       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:44.793361    5068 command_runner.go:130] ! I0507 19:51:58.074557       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:44.812307    5068 logs.go:123] Gathering logs for coredns [9550b237d8d7] ...
	I0507 19:55:44.812307    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9550b237d8d7"
	I0507 19:55:44.838724    5068 command_runner.go:130] > .:53
	I0507 19:55:44.839237    5068 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = a3820eb745a9a768a035bf81145ae0754aeb40457ffd5109db8c64dac842ada6c2edf6f9e6a410714e0f5cbc9cd90cb925a2fb37599adf58a40dc1bc5fa339b9
	I0507 19:55:44.839237    5068 command_runner.go:130] > CoreDNS-1.11.1
	I0507 19:55:44.839237    5068 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0507 19:55:44.839237    5068 command_runner.go:130] > [INFO] 127.0.0.1:52654 - 36159 "HINFO IN 3626502665556373881.284047733441029162. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.030998756s
	I0507 19:55:44.839237    5068 command_runner.go:130] > [INFO] 10.244.1.2:39771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00031622s
	I0507 19:55:44.839237    5068 command_runner.go:130] > [INFO] 10.244.1.2:55622 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.122912472s
	I0507 19:55:44.839237    5068 command_runner.go:130] > [INFO] 10.244.1.2:43817 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.066971198s
	I0507 19:55:44.839362    5068 command_runner.go:130] > [INFO] 10.244.1.2:39650 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.458807699s
	I0507 19:55:44.839362    5068 command_runner.go:130] > [INFO] 10.244.0.3:47684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164311s
	I0507 19:55:44.839362    5068 command_runner.go:130] > [INFO] 10.244.0.3:35317 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.00014611s
	I0507 19:55:44.839362    5068 command_runner.go:130] > [INFO] 10.244.0.3:42135 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000170411s
	I0507 19:55:44.839362    5068 command_runner.go:130] > [INFO] 10.244.0.3:41756 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000172612s
	I0507 19:55:44.839362    5068 command_runner.go:130] > [INFO] 10.244.1.2:40802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169011s
	I0507 19:55:44.839362    5068 command_runner.go:130] > [INFO] 10.244.1.2:55691 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.060031941s
	I0507 19:55:44.839451    5068 command_runner.go:130] > [INFO] 10.244.1.2:46687 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000212614s
	I0507 19:55:44.839451    5068 command_runner.go:130] > [INFO] 10.244.1.2:51698 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000276418s
	I0507 19:55:44.839507    5068 command_runner.go:130] > [INFO] 10.244.1.2:40943 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014055822s
	I0507 19:55:44.839507    5068 command_runner.go:130] > [INFO] 10.244.1.2:55853 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128309s
	I0507 19:55:44.839540    5068 command_runner.go:130] > [INFO] 10.244.1.2:34444 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000187212s
	I0507 19:55:44.839540    5068 command_runner.go:130] > [INFO] 10.244.1.2:54956 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091106s
	I0507 19:55:44.839540    5068 command_runner.go:130] > [INFO] 10.244.0.3:37511 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00031542s
	I0507 19:55:44.839540    5068 command_runner.go:130] > [INFO] 10.244.0.3:47331 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061304s
	I0507 19:55:44.839604    5068 command_runner.go:130] > [INFO] 10.244.0.3:36195 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211814s
	I0507 19:55:44.839604    5068 command_runner.go:130] > [INFO] 10.244.0.3:37240 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014531s
	I0507 19:55:44.839604    5068 command_runner.go:130] > [INFO] 10.244.0.3:56992 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00014411s
	I0507 19:55:44.839604    5068 command_runner.go:130] > [INFO] 10.244.0.3:53922 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127508s
	I0507 19:55:44.839664    5068 command_runner.go:130] > [INFO] 10.244.0.3:51034 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000225815s
	I0507 19:55:44.839664    5068 command_runner.go:130] > [INFO] 10.244.0.3:45123 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130808s
	I0507 19:55:44.839664    5068 command_runner.go:130] > [INFO] 10.244.1.2:53185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190512s
	I0507 19:55:44.839664    5068 command_runner.go:130] > [INFO] 10.244.1.2:47331 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056804s
	I0507 19:55:44.839716    5068 command_runner.go:130] > [INFO] 10.244.1.2:42551 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058104s
	I0507 19:55:44.839747    5068 command_runner.go:130] > [INFO] 10.244.1.2:47860 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057104s
	I0507 19:55:44.839747    5068 command_runner.go:130] > [INFO] 10.244.0.3:53037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190312s
	I0507 19:55:44.839747    5068 command_runner.go:130] > [INFO] 10.244.0.3:60613 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143109s
	I0507 19:55:44.839747    5068 command_runner.go:130] > [INFO] 10.244.0.3:33867 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069105s
	I0507 19:55:44.839747    5068 command_runner.go:130] > [INFO] 10.244.0.3:40289 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014191s
	I0507 19:55:44.839747    5068 command_runner.go:130] > [INFO] 10.244.1.2:55673 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204514s
	I0507 19:55:44.839747    5068 command_runner.go:130] > [INFO] 10.244.1.2:46474 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132609s
	I0507 19:55:44.839862    5068 command_runner.go:130] > [INFO] 10.244.1.2:48070 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000170211s
	I0507 19:55:44.839862    5068 command_runner.go:130] > [INFO] 10.244.1.2:56147 - 5 "PTR IN 1.128.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093806s
	I0507 19:55:44.839862    5068 command_runner.go:130] > [INFO] 10.244.0.3:39426 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107507s
	I0507 19:55:44.839907    5068 command_runner.go:130] > [INFO] 10.244.0.3:42569 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000295619s
	I0507 19:55:44.839907    5068 command_runner.go:130] > [INFO] 10.244.0.3:56970 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000267917s
	I0507 19:55:44.839907    5068 command_runner.go:130] > [INFO] 10.244.0.3:55625 - 5 "PTR IN 1.128.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00014751s
	I0507 19:55:44.839907    5068 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0507 19:55:44.839966    5068 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0507 19:55:47.353765    5068 api_server.go:253] Checking apiserver healthz at https://172.19.135.22:8443/healthz ...
	I0507 19:55:47.362901    5068 api_server.go:279] https://172.19.135.22:8443/healthz returned 200:
	ok
	I0507 19:55:47.363940    5068 round_trippers.go:463] GET https://172.19.135.22:8443/version
	I0507 19:55:47.363940    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:47.363986    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:47.363986    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:47.365824    5068 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0507 19:55:47.365824    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:47.365824    5068 round_trippers.go:580]     Audit-Id: 8ddf587c-e51e-40c2-b5d4-cbc8e5e7538b
	I0507 19:55:47.365824    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:47.365824    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:47.365824    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:47.365824    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:47.365824    5068 round_trippers.go:580]     Content-Length: 263
	I0507 19:55:47.365824    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:47 GMT
	I0507 19:55:47.365824    5068 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.0",
	  "gitCommit": "7c48c2bd72b9bf5c44d21d7338cc7bea77d0ad2a",
	  "gitTreeState": "clean",
	  "buildDate": "2024-04-17T17:27:03Z",
	  "goVersion": "go1.22.2",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0507 19:55:47.365824    5068 api_server.go:141] control plane version: v1.30.0
	I0507 19:55:47.366370    5068 api_server.go:131] duration metric: took 3.6207926s to wait for apiserver health ...
	I0507 19:55:47.366370    5068 system_pods.go:43] waiting for kube-system pods to appear ...
	I0507 19:55:47.373030    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I0507 19:55:47.398756    5068 command_runner.go:130] > 7c95e3addc4b
	I0507 19:55:47.398756    5068 logs.go:276] 1 containers: [7c95e3addc4b]
	I0507 19:55:47.404927    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I0507 19:55:47.430373    5068 command_runner.go:130] > ac320a872e77
	I0507 19:55:47.431602    5068 logs.go:276] 1 containers: [ac320a872e77]
	I0507 19:55:47.438605    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I0507 19:55:47.460396    5068 command_runner.go:130] > d27627c19808
	I0507 19:55:47.460396    5068 command_runner.go:130] > 9550b237d8d7
	I0507 19:55:47.461702    5068 logs.go:276] 2 containers: [d27627c19808 9550b237d8d7]
	I0507 19:55:47.468492    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I0507 19:55:47.492581    5068 command_runner.go:130] > 45341720d5be
	I0507 19:55:47.493211    5068 command_runner.go:130] > 7cefdac2050f
	I0507 19:55:47.493211    5068 logs.go:276] 2 containers: [45341720d5be 7cefdac2050f]
	I0507 19:55:47.500344    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I0507 19:55:47.520337    5068 command_runner.go:130] > 5255a972ff6c
	I0507 19:55:47.521150    5068 command_runner.go:130] > aa9692c1fbd3
	I0507 19:55:47.521150    5068 logs.go:276] 2 containers: [5255a972ff6c aa9692c1fbd3]
	I0507 19:55:47.529360    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I0507 19:55:47.550453    5068 command_runner.go:130] > 922d1e2b8745
	I0507 19:55:47.550453    5068 command_runner.go:130] > 3067f16e2e38
	I0507 19:55:47.550453    5068 logs.go:276] 2 containers: [922d1e2b8745 3067f16e2e38]
	I0507 19:55:47.561673    5068 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I0507 19:55:47.580162    5068 command_runner.go:130] > 29b5cae0b8f1
	I0507 19:55:47.580162    5068 command_runner.go:130] > 2d49ad078ed3
	I0507 19:55:47.580162    5068 logs.go:276] 2 containers: [29b5cae0b8f1 2d49ad078ed3]
	I0507 19:55:47.580162    5068 logs.go:123] Gathering logs for describe nodes ...
	I0507 19:55:47.580162    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 19:55:47.743081    5068 command_runner.go:130] > Name:               multinode-600000
	I0507 19:55:47.743081    5068 command_runner.go:130] > Roles:              control-plane
	I0507 19:55:47.743081    5068 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0507 19:55:47.743081    5068 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0507 19:55:47.743081    5068 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0507 19:55:47.743081    5068 command_runner.go:130] >                     kubernetes.io/hostname=multinode-600000
	I0507 19:55:47.743081    5068 command_runner.go:130] >                     kubernetes.io/os=linux
	I0507 19:55:47.743081    5068 command_runner.go:130] >                     minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	I0507 19:55:47.743081    5068 command_runner.go:130] >                     minikube.k8s.io/name=multinode-600000
	I0507 19:55:47.743081    5068 command_runner.go:130] >                     minikube.k8s.io/primary=true
	I0507 19:55:47.743081    5068 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_07T19_33_45_0700
	I0507 19:55:47.743081    5068 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0507 19:55:47.743081    5068 command_runner.go:130] >                     node-role.kubernetes.io/control-plane=
	I0507 19:55:47.743081    5068 command_runner.go:130] >                     node.kubernetes.io/exclude-from-external-load-balancers=
	I0507 19:55:47.743081    5068 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0507 19:55:47.743081    5068 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0507 19:55:47.743081    5068 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0507 19:55:47.743081    5068 command_runner.go:130] > CreationTimestamp:  Tue, 07 May 2024 19:33:41 +0000
	I0507 19:55:47.743081    5068 command_runner.go:130] > Taints:             <none>
	I0507 19:55:47.743081    5068 command_runner.go:130] > Unschedulable:      false
	I0507 19:55:47.743081    5068 command_runner.go:130] > Lease:
	I0507 19:55:47.743081    5068 command_runner.go:130] >   HolderIdentity:  multinode-600000
	I0507 19:55:47.743081    5068 command_runner.go:130] >   AcquireTime:     <unset>
	I0507 19:55:47.743081    5068 command_runner.go:130] >   RenewTime:       Tue, 07 May 2024 19:55:45 +0000
	I0507 19:55:47.743081    5068 command_runner.go:130] > Conditions:
	I0507 19:55:47.743081    5068 command_runner.go:130] >   Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	I0507 19:55:47.743081    5068 command_runner.go:130] >   ----             ------  -----------------                 ------------------                ------                       -------
	I0507 19:55:47.743081    5068 command_runner.go:130] >   MemoryPressure   False   Tue, 07 May 2024 19:55:09 +0000   Tue, 07 May 2024 19:33:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	I0507 19:55:47.743081    5068 command_runner.go:130] >   DiskPressure     False   Tue, 07 May 2024 19:55:09 +0000   Tue, 07 May 2024 19:33:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	I0507 19:55:47.743081    5068 command_runner.go:130] >   PIDPressure      False   Tue, 07 May 2024 19:55:09 +0000   Tue, 07 May 2024 19:33:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	I0507 19:55:47.743649    5068 command_runner.go:130] >   Ready            True    Tue, 07 May 2024 19:55:09 +0000   Tue, 07 May 2024 19:55:09 +0000   KubeletReady                 kubelet is posting ready status
	I0507 19:55:47.743649    5068 command_runner.go:130] > Addresses:
	I0507 19:55:47.743649    5068 command_runner.go:130] >   InternalIP:  172.19.135.22
	I0507 19:55:47.743709    5068 command_runner.go:130] >   Hostname:    multinode-600000
	I0507 19:55:47.743709    5068 command_runner.go:130] > Capacity:
	I0507 19:55:47.743709    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:47.743709    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:47.743709    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:47.743709    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:47.743709    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:47.743709    5068 command_runner.go:130] > Allocatable:
	I0507 19:55:47.743709    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:47.743709    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:47.743709    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:47.743709    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:47.743709    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:47.743709    5068 command_runner.go:130] > System Info:
	I0507 19:55:47.743709    5068 command_runner.go:130] >   Machine ID:                 fa6f1530e0ab4546b96ea753f13add46
	I0507 19:55:47.743709    5068 command_runner.go:130] >   System UUID:                f3433f71-57fc-a747-9f8d-4f98c0c4b458
	I0507 19:55:47.743709    5068 command_runner.go:130] >   Boot ID:                    93b81312-340b-4997-83aa-cdf61cfe3dbf
	I0507 19:55:47.743709    5068 command_runner.go:130] >   Kernel Version:             5.10.207
	I0507 19:55:47.743709    5068 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0507 19:55:47.743709    5068 command_runner.go:130] >   Operating System:           linux
	I0507 19:55:47.743709    5068 command_runner.go:130] >   Architecture:               amd64
	I0507 19:55:47.743709    5068 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0507 19:55:47.743709    5068 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0507 19:55:47.743709    5068 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0507 19:55:47.743709    5068 command_runner.go:130] > PodCIDR:                      10.244.0.0/24
	I0507 19:55:47.743709    5068 command_runner.go:130] > PodCIDRs:                     10.244.0.0/24
	I0507 19:55:47.743709    5068 command_runner.go:130] > Non-terminated Pods:          (9 in total)
	I0507 19:55:47.743709    5068 command_runner.go:130] >   Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0507 19:55:47.743709    5068 command_runner.go:130] >   ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	I0507 19:55:47.743709    5068 command_runner.go:130] >   default                     busybox-fc5497c4f-gcqlv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0507 19:55:47.743709    5068 command_runner.go:130] >   kube-system                 coredns-7db6d8ff4d-5j966                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     21m
	I0507 19:55:47.743709    5068 command_runner.go:130] >   kube-system                 etcd-multinode-600000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         74s
	I0507 19:55:47.743709    5068 command_runner.go:130] >   kube-system                 kindnet-zw4r9                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      21m
	I0507 19:55:47.743709    5068 command_runner.go:130] >   kube-system                 kube-apiserver-multinode-600000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         72s
	I0507 19:55:47.743709    5068 command_runner.go:130] >   kube-system                 kube-controller-manager-multinode-600000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         22m
	I0507 19:55:47.743709    5068 command_runner.go:130] >   kube-system                 kube-proxy-c9gw5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0507 19:55:47.743709    5068 command_runner.go:130] >   kube-system                 kube-scheduler-multinode-600000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         22m
	I0507 19:55:47.743709    5068 command_runner.go:130] >   kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         21m
	I0507 19:55:47.743709    5068 command_runner.go:130] > Allocated resources:
	I0507 19:55:47.743709    5068 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0507 19:55:47.743709    5068 command_runner.go:130] >   Resource           Requests     Limits
	I0507 19:55:47.744237    5068 command_runner.go:130] >   --------           --------     ------
	I0507 19:55:47.744237    5068 command_runner.go:130] >   cpu                850m (42%)   100m (5%)
	I0507 19:55:47.744237    5068 command_runner.go:130] >   memory             220Mi (10%)  220Mi (10%)
	I0507 19:55:47.744237    5068 command_runner.go:130] >   ephemeral-storage  0 (0%)       0 (0%)
	I0507 19:55:47.744237    5068 command_runner.go:130] >   hugepages-2Mi      0 (0%)       0 (0%)
	I0507 19:55:47.744237    5068 command_runner.go:130] > Events:
	I0507 19:55:47.744237    5068 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0507 19:55:47.744331    5068 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0507 19:55:47.744331    5068 command_runner.go:130] >   Normal  Starting                 21m                kube-proxy       
	I0507 19:55:47.744331    5068 command_runner.go:130] >   Normal  Starting                 72s                kube-proxy       
	I0507 19:55:47.744331    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node multinode-600000 status is now: NodeHasSufficientMemory
	I0507 19:55:47.744403    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node multinode-600000 status is now: NodeHasNoDiskPressure
	I0507 19:55:47.744431    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node multinode-600000 status is now: NodeHasSufficientPID
	I0507 19:55:47.744473    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:47.744473    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    22m                kubelet          Node multinode-600000 status is now: NodeHasNoDiskPressure
	I0507 19:55:47.744473    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  22m                kubelet          Node multinode-600000 status is now: NodeHasSufficientMemory
	I0507 19:55:47.744473    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     22m                kubelet          Node multinode-600000 status is now: NodeHasSufficientPID
	I0507 19:55:47.744570    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:47.744570    5068 command_runner.go:130] >   Normal  Starting                 22m                kubelet          Starting kubelet.
	I0507 19:55:47.744613    5068 command_runner.go:130] >   Normal  RegisteredNode           21m                node-controller  Node multinode-600000 event: Registered Node multinode-600000 in Controller
	I0507 19:55:47.744650    5068 command_runner.go:130] >   Normal  NodeReady                21m                kubelet          Node multinode-600000 status is now: NodeReady
	I0507 19:55:47.744667    5068 command_runner.go:130] >   Normal  Starting                 79s                kubelet          Starting kubelet.
	I0507 19:55:47.744667    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     79s (x7 over 79s)  kubelet          Node multinode-600000 status is now: NodeHasSufficientPID
	I0507 19:55:47.744709    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  79s                kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:47.744745    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  78s (x8 over 79s)  kubelet          Node multinode-600000 status is now: NodeHasSufficientMemory
	I0507 19:55:47.744761    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    78s (x8 over 79s)  kubelet          Node multinode-600000 status is now: NodeHasNoDiskPressure
	I0507 19:55:47.744823    5068 command_runner.go:130] >   Normal  RegisteredNode           61s                node-controller  Node multinode-600000 event: Registered Node multinode-600000 in Controller
	I0507 19:55:47.744823    5068 command_runner.go:130] > Name:               multinode-600000-m02
	I0507 19:55:47.744849    5068 command_runner.go:130] > Roles:              <none>
	I0507 19:55:47.744849    5068 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0507 19:55:47.744894    5068 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0507 19:55:47.744894    5068 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0507 19:55:47.744894    5068 command_runner.go:130] >                     kubernetes.io/hostname=multinode-600000-m02
	I0507 19:55:47.744894    5068 command_runner.go:130] >                     kubernetes.io/os=linux
	I0507 19:55:47.744894    5068 command_runner.go:130] >                     minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	I0507 19:55:47.744965    5068 command_runner.go:130] >                     minikube.k8s.io/name=multinode-600000
	I0507 19:55:47.744965    5068 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0507 19:55:47.744965    5068 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_07T19_36_40_0700
	I0507 19:55:47.744965    5068 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0507 19:55:47.744965    5068 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0507 19:55:47.744965    5068 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0507 19:55:47.745066    5068 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0507 19:55:47.745066    5068 command_runner.go:130] > CreationTimestamp:  Tue, 07 May 2024 19:36:39 +0000
	I0507 19:55:47.745066    5068 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0507 19:55:47.745066    5068 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0507 19:55:47.745066    5068 command_runner.go:130] > Unschedulable:      false
	I0507 19:55:47.745066    5068 command_runner.go:130] > Lease:
	I0507 19:55:47.745066    5068 command_runner.go:130] >   HolderIdentity:  multinode-600000-m02
	I0507 19:55:47.745156    5068 command_runner.go:130] >   AcquireTime:     <unset>
	I0507 19:55:47.745156    5068 command_runner.go:130] >   RenewTime:       Tue, 07 May 2024 19:51:38 +0000
	I0507 19:55:47.745156    5068 command_runner.go:130] > Conditions:
	I0507 19:55:47.745156    5068 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0507 19:55:47.745156    5068 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0507 19:55:47.745156    5068 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 07 May 2024 19:47:54 +0000   Tue, 07 May 2024 19:55:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:47.745279    5068 command_runner.go:130] >   DiskPressure     Unknown   Tue, 07 May 2024 19:47:54 +0000   Tue, 07 May 2024 19:55:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:47.745279    5068 command_runner.go:130] >   PIDPressure      Unknown   Tue, 07 May 2024 19:47:54 +0000   Tue, 07 May 2024 19:55:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:47.745279    5068 command_runner.go:130] >   Ready            Unknown   Tue, 07 May 2024 19:47:54 +0000   Tue, 07 May 2024 19:55:26 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:47.745279    5068 command_runner.go:130] > Addresses:
	I0507 19:55:47.745279    5068 command_runner.go:130] >   InternalIP:  172.19.143.144
	I0507 19:55:47.745279    5068 command_runner.go:130] >   Hostname:    multinode-600000-m02
	I0507 19:55:47.745370    5068 command_runner.go:130] > Capacity:
	I0507 19:55:47.745370    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:47.745370    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:47.745370    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:47.745370    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:47.745370    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:47.745370    5068 command_runner.go:130] > Allocatable:
	I0507 19:55:47.745370    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:47.745370    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:47.745461    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:47.745461    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:47.745461    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:47.745461    5068 command_runner.go:130] > System Info:
	I0507 19:55:47.745461    5068 command_runner.go:130] >   Machine ID:                 34eb4e78cde1423b93517d0087c85f3c
	I0507 19:55:47.745461    5068 command_runner.go:130] >   System UUID:                7ed694c3-4cb4-954c-b244-d0ff36461420
	I0507 19:55:47.745543    5068 command_runner.go:130] >   Boot ID:                    6dd39eeb-a923-4a09-93c8-8c26dd122d68
	I0507 19:55:47.745543    5068 command_runner.go:130] >   Kernel Version:             5.10.207
	I0507 19:55:47.745543    5068 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0507 19:55:47.745543    5068 command_runner.go:130] >   Operating System:           linux
	I0507 19:55:47.745543    5068 command_runner.go:130] >   Architecture:               amd64
	I0507 19:55:47.745631    5068 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0507 19:55:47.745631    5068 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0507 19:55:47.745631    5068 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0507 19:55:47.745631    5068 command_runner.go:130] > PodCIDR:                      10.244.1.0/24
	I0507 19:55:47.745631    5068 command_runner.go:130] > PodCIDRs:                     10.244.1.0/24
	I0507 19:55:47.745631    5068 command_runner.go:130] > Non-terminated Pods:          (3 in total)
	I0507 19:55:47.745631    5068 command_runner.go:130] >   Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0507 19:55:47.745722    5068 command_runner.go:130] >   ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	I0507 19:55:47.745722    5068 command_runner.go:130] >   default                     busybox-fc5497c4f-cpw2r    0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	I0507 19:55:47.745722    5068 command_runner.go:130] >   kube-system                 kindnet-jmlw2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	I0507 19:55:47.745804    5068 command_runner.go:130] >   kube-system                 kube-proxy-9fb6t           0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	I0507 19:55:47.745804    5068 command_runner.go:130] > Allocated resources:
	I0507 19:55:47.745804    5068 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0507 19:55:47.745804    5068 command_runner.go:130] >   Resource           Requests   Limits
	I0507 19:55:47.745804    5068 command_runner.go:130] >   --------           --------   ------
	I0507 19:55:47.745804    5068 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0507 19:55:47.745890    5068 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0507 19:55:47.745890    5068 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0507 19:55:47.745890    5068 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0507 19:55:47.745890    5068 command_runner.go:130] > Events:
	I0507 19:55:47.745890    5068 command_runner.go:130] >   Type    Reason                   Age                From             Message
	I0507 19:55:47.745890    5068 command_runner.go:130] >   ----    ------                   ----               ----             -------
	I0507 19:55:47.745979    5068 command_runner.go:130] >   Normal  Starting                 18m                kube-proxy       
	I0507 19:55:47.745979    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  19m (x2 over 19m)  kubelet          Node multinode-600000-m02 status is now: NodeHasSufficientMemory
	I0507 19:55:47.745979    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet          Node multinode-600000-m02 status is now: NodeHasNoDiskPressure
	I0507 19:55:47.745979    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node multinode-600000-m02 status is now: NodeHasSufficientPID
	I0507 19:55:47.746059    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:47.746059    5068 command_runner.go:130] >   Normal  RegisteredNode           19m                node-controller  Node multinode-600000-m02 event: Registered Node multinode-600000-m02 in Controller
	I0507 19:55:47.746059    5068 command_runner.go:130] >   Normal  NodeReady                18m                kubelet          Node multinode-600000-m02 status is now: NodeReady
	I0507 19:55:47.746151    5068 command_runner.go:130] >   Normal  RegisteredNode           61s                node-controller  Node multinode-600000-m02 event: Registered Node multinode-600000-m02 in Controller
	I0507 19:55:47.746151    5068 command_runner.go:130] >   Normal  NodeNotReady             21s                node-controller  Node multinode-600000-m02 status is now: NodeNotReady
	I0507 19:55:47.746151    5068 command_runner.go:130] > Name:               multinode-600000-m03
	I0507 19:55:47.746151    5068 command_runner.go:130] > Roles:              <none>
	I0507 19:55:47.746151    5068 command_runner.go:130] > Labels:             beta.kubernetes.io/arch=amd64
	I0507 19:55:47.746151    5068 command_runner.go:130] >                     beta.kubernetes.io/os=linux
	I0507 19:55:47.746234    5068 command_runner.go:130] >                     kubernetes.io/arch=amd64
	I0507 19:55:47.746281    5068 command_runner.go:130] >                     kubernetes.io/hostname=multinode-600000-m03
	I0507 19:55:47.746281    5068 command_runner.go:130] >                     kubernetes.io/os=linux
	I0507 19:55:47.746281    5068 command_runner.go:130] >                     minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	I0507 19:55:47.746325    5068 command_runner.go:130] >                     minikube.k8s.io/name=multinode-600000
	I0507 19:55:47.746325    5068 command_runner.go:130] >                     minikube.k8s.io/primary=false
	I0507 19:55:47.746355    5068 command_runner.go:130] >                     minikube.k8s.io/updated_at=2024_05_07T19_50_26_0700
	I0507 19:55:47.746355    5068 command_runner.go:130] >                     minikube.k8s.io/version=v1.33.0
	I0507 19:55:47.746355    5068 command_runner.go:130] > Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	I0507 19:55:47.746355    5068 command_runner.go:130] >                     node.alpha.kubernetes.io/ttl: 0
	I0507 19:55:47.746355    5068 command_runner.go:130] >                     volumes.kubernetes.io/controller-managed-attach-detach: true
	I0507 19:55:47.746438    5068 command_runner.go:130] > CreationTimestamp:  Tue, 07 May 2024 19:50:25 +0000
	I0507 19:55:47.746438    5068 command_runner.go:130] > Taints:             node.kubernetes.io/unreachable:NoExecute
	I0507 19:55:47.746438    5068 command_runner.go:130] >                     node.kubernetes.io/unreachable:NoSchedule
	I0507 19:55:47.746438    5068 command_runner.go:130] > Unschedulable:      false
	I0507 19:55:47.746438    5068 command_runner.go:130] > Lease:
	I0507 19:55:47.746438    5068 command_runner.go:130] >   HolderIdentity:  multinode-600000-m03
	I0507 19:55:47.746438    5068 command_runner.go:130] >   AcquireTime:     <unset>
	I0507 19:55:47.746525    5068 command_runner.go:130] >   RenewTime:       Tue, 07 May 2024 19:51:16 +0000
	I0507 19:55:47.746525    5068 command_runner.go:130] > Conditions:
	I0507 19:55:47.746525    5068 command_runner.go:130] >   Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	I0507 19:55:47.746525    5068 command_runner.go:130] >   ----             ------    -----------------                 ------------------                ------              -------
	I0507 19:55:47.746525    5068 command_runner.go:130] >   MemoryPressure   Unknown   Tue, 07 May 2024 19:50:31 +0000   Tue, 07 May 2024 19:51:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:47.746610    5068 command_runner.go:130] >   DiskPressure     Unknown   Tue, 07 May 2024 19:50:31 +0000   Tue, 07 May 2024 19:51:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:47.746610    5068 command_runner.go:130] >   PIDPressure      Unknown   Tue, 07 May 2024 19:50:31 +0000   Tue, 07 May 2024 19:51:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:47.746610    5068 command_runner.go:130] >   Ready            Unknown   Tue, 07 May 2024 19:50:31 +0000   Tue, 07 May 2024 19:51:58 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	I0507 19:55:47.746905    5068 command_runner.go:130] > Addresses:
	I0507 19:55:47.747010    5068 command_runner.go:130] >   InternalIP:  172.19.129.4
	I0507 19:55:47.747010    5068 command_runner.go:130] >   Hostname:    multinode-600000-m03
	I0507 19:55:47.747104    5068 command_runner.go:130] > Capacity:
	I0507 19:55:47.747150    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:47.747150    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:47.747150    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:47.747150    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:47.747150    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:47.747150    5068 command_runner.go:130] > Allocatable:
	I0507 19:55:47.747150    5068 command_runner.go:130] >   cpu:                2
	I0507 19:55:47.747150    5068 command_runner.go:130] >   ephemeral-storage:  17734596Ki
	I0507 19:55:47.747150    5068 command_runner.go:130] >   hugepages-2Mi:      0
	I0507 19:55:47.747150    5068 command_runner.go:130] >   memory:             2164264Ki
	I0507 19:55:47.747150    5068 command_runner.go:130] >   pods:               110
	I0507 19:55:47.747150    5068 command_runner.go:130] > System Info:
	I0507 19:55:47.747150    5068 command_runner.go:130] >   Machine ID:                 380df77fae65410dba19d02344fea647
	I0507 19:55:47.747150    5068 command_runner.go:130] >   System UUID:                ed9d4a55-0088-004e-addb-543af9e02720
	I0507 19:55:47.747150    5068 command_runner.go:130] >   Boot ID:                    e0ec4add-64d0-47e3-9547-3261cfbddd3a
	I0507 19:55:47.747150    5068 command_runner.go:130] >   Kernel Version:             5.10.207
	I0507 19:55:47.747150    5068 command_runner.go:130] >   OS Image:                   Buildroot 2023.02.9
	I0507 19:55:47.747150    5068 command_runner.go:130] >   Operating System:           linux
	I0507 19:55:47.747150    5068 command_runner.go:130] >   Architecture:               amd64
	I0507 19:55:47.747150    5068 command_runner.go:130] >   Container Runtime Version:  docker://26.0.2
	I0507 19:55:47.747150    5068 command_runner.go:130] >   Kubelet Version:            v1.30.0
	I0507 19:55:47.747150    5068 command_runner.go:130] >   Kube-Proxy Version:         v1.30.0
	I0507 19:55:47.747150    5068 command_runner.go:130] > PodCIDR:                      10.244.3.0/24
	I0507 19:55:47.747150    5068 command_runner.go:130] > PodCIDRs:                     10.244.3.0/24
	I0507 19:55:47.747150    5068 command_runner.go:130] > Non-terminated Pods:          (2 in total)
	I0507 19:55:47.747150    5068 command_runner.go:130] >   Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	I0507 19:55:47.747150    5068 command_runner.go:130] >   ---------                   ----                ------------  ----------  ---------------  -------------  ---
	I0507 19:55:47.747680    5068 command_runner.go:130] >   kube-system                 kindnet-dkxzt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	I0507 19:55:47.747680    5068 command_runner.go:130] >   kube-system                 kube-proxy-pzn8q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	I0507 19:55:47.747680    5068 command_runner.go:130] > Allocated resources:
	I0507 19:55:47.747793    5068 command_runner.go:130] >   (Total limits may be over 100 percent, i.e., overcommitted.)
	I0507 19:55:47.747793    5068 command_runner.go:130] >   Resource           Requests   Limits
	I0507 19:55:47.747793    5068 command_runner.go:130] >   --------           --------   ------
	I0507 19:55:47.747883    5068 command_runner.go:130] >   cpu                100m (5%)  100m (5%)
	I0507 19:55:47.747883    5068 command_runner.go:130] >   memory             50Mi (2%)  50Mi (2%)
	I0507 19:55:47.747883    5068 command_runner.go:130] >   ephemeral-storage  0 (0%)     0 (0%)
	I0507 19:55:47.747998    5068 command_runner.go:130] >   hugepages-2Mi      0 (0%)     0 (0%)
	I0507 19:55:47.747998    5068 command_runner.go:130] > Events:
	I0507 19:55:47.747998    5068 command_runner.go:130] >   Type    Reason                   Age                    From             Message
	I0507 19:55:47.748057    5068 command_runner.go:130] >   ----    ------                   ----                   ----             -------
	I0507 19:55:47.748108    5068 command_runner.go:130] >   Normal  Starting                 5m18s                  kube-proxy       
	I0507 19:55:47.748108    5068 command_runner.go:130] >   Normal  Starting                 14m                    kube-proxy       
	I0507 19:55:47.748182    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:47.748182    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  14m (x2 over 14m)      kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientMemory
	I0507 19:55:47.748258    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    14m (x2 over 14m)      kubelet          Node multinode-600000-m03 status is now: NodeHasNoDiskPressure
	I0507 19:55:47.748258    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     14m (x2 over 14m)      kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientPID
	I0507 19:55:47.748342    5068 command_runner.go:130] >   Normal  NodeReady                14m                    kubelet          Node multinode-600000-m03 status is now: NodeReady
	I0507 19:55:47.748421    5068 command_runner.go:130] >   Normal  NodeHasSufficientMemory  5m22s (x2 over 5m22s)  kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientMemory
	I0507 19:55:47.748421    5068 command_runner.go:130] >   Normal  NodeHasNoDiskPressure    5m22s (x2 over 5m22s)  kubelet          Node multinode-600000-m03 status is now: NodeHasNoDiskPressure
	I0507 19:55:47.748499    5068 command_runner.go:130] >   Normal  NodeHasSufficientPID     5m22s (x2 over 5m22s)  kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientPID
	I0507 19:55:47.748499    5068 command_runner.go:130] >   Normal  NodeAllocatableEnforced  5m22s                  kubelet          Updated Node Allocatable limit across pods
	I0507 19:55:47.748574    5068 command_runner.go:130] >   Normal  RegisteredNode           5m19s                  node-controller  Node multinode-600000-m03 event: Registered Node multinode-600000-m03 in Controller
	I0507 19:55:47.748574    5068 command_runner.go:130] >   Normal  NodeReady                5m16s                  kubelet          Node multinode-600000-m03 status is now: NodeReady
	I0507 19:55:47.748650    5068 command_runner.go:130] >   Normal  NodeNotReady             3m49s                  node-controller  Node multinode-600000-m03 status is now: NodeNotReady
	I0507 19:55:47.748736    5068 command_runner.go:130] >   Normal  RegisteredNode           61s                    node-controller  Node multinode-600000-m03 event: Registered Node multinode-600000-m03 in Controller
	I0507 19:55:47.758210    5068 logs.go:123] Gathering logs for kube-apiserver [7c95e3addc4b] ...
	I0507 19:55:47.758210    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7c95e3addc4b"
	I0507 19:55:47.783703    5068 command_runner.go:130] ! I0507 19:54:30.988770       1 options.go:221] external host was not specified, using 172.19.135.22
	I0507 19:55:47.783703    5068 command_runner.go:130] ! I0507 19:54:30.995893       1 server.go:148] Version: v1.30.0
	I0507 19:55:47.783703    5068 command_runner.go:130] ! I0507 19:54:30.996132       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:47.783703    5068 command_runner.go:130] ! I0507 19:54:31.800337       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0507 19:55:47.783783    5068 command_runner.go:130] ! I0507 19:54:31.800374       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0507 19:55:47.783783    5068 command_runner.go:130] ! I0507 19:54:31.801064       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0507 19:55:47.783851    5068 command_runner.go:130] ! I0507 19:54:31.801131       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0507 19:55:47.783851    5068 command_runner.go:130] ! I0507 19:54:31.801553       1 instance.go:299] Using reconciler: lease
	I0507 19:55:47.783851    5068 command_runner.go:130] ! I0507 19:54:32.352039       1 handler.go:286] Adding GroupVersion apiextensions.k8s.io v1 to ResourceManager
	I0507 19:55:47.783851    5068 command_runner.go:130] ! W0507 19:54:32.352075       1 genericapiserver.go:733] Skipping API apiextensions.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:47.783851    5068 command_runner.go:130] ! I0507 19:54:32.609708       1 handler.go:286] Adding GroupVersion  v1 to ResourceManager
	I0507 19:55:47.783851    5068 command_runner.go:130] ! I0507 19:54:32.610006       1 instance.go:696] API group "internal.apiserver.k8s.io" is not enabled, skipping.
	I0507 19:55:47.783851    5068 command_runner.go:130] ! I0507 19:54:32.836522       1 instance.go:696] API group "storagemigration.k8s.io" is not enabled, skipping.
	I0507 19:55:47.783983    5068 command_runner.go:130] ! I0507 19:54:32.999148       1 instance.go:696] API group "resource.k8s.io" is not enabled, skipping.
	I0507 19:55:47.783983    5068 command_runner.go:130] ! I0507 19:54:33.030018       1 handler.go:286] Adding GroupVersion authentication.k8s.io v1 to ResourceManager
	I0507 19:55:47.783983    5068 command_runner.go:130] ! W0507 19:54:33.030136       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:47.783983    5068 command_runner.go:130] ! W0507 19:54:33.030146       1 genericapiserver.go:733] Skipping API authentication.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:47.784054    5068 command_runner.go:130] ! I0507 19:54:33.030562       1 handler.go:286] Adding GroupVersion authorization.k8s.io v1 to ResourceManager
	I0507 19:55:47.784054    5068 command_runner.go:130] ! W0507 19:54:33.030671       1 genericapiserver.go:733] Skipping API authorization.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:47.784054    5068 command_runner.go:130] ! I0507 19:54:33.031835       1 handler.go:286] Adding GroupVersion autoscaling v2 to ResourceManager
	I0507 19:55:47.784054    5068 command_runner.go:130] ! I0507 19:54:33.032596       1 handler.go:286] Adding GroupVersion autoscaling v1 to ResourceManager
	I0507 19:55:47.784054    5068 command_runner.go:130] ! W0507 19:54:33.032785       1 genericapiserver.go:733] Skipping API autoscaling/v2beta1 because it has no resources.
	I0507 19:55:47.784125    5068 command_runner.go:130] ! W0507 19:54:33.032807       1 genericapiserver.go:733] Skipping API autoscaling/v2beta2 because it has no resources.
	I0507 19:55:47.784125    5068 command_runner.go:130] ! I0507 19:54:33.034337       1 handler.go:286] Adding GroupVersion batch v1 to ResourceManager
	I0507 19:55:47.784125    5068 command_runner.go:130] ! W0507 19:54:33.034455       1 genericapiserver.go:733] Skipping API batch/v1beta1 because it has no resources.
	I0507 19:55:47.784125    5068 command_runner.go:130] ! I0507 19:54:33.035255       1 handler.go:286] Adding GroupVersion certificates.k8s.io v1 to ResourceManager
	I0507 19:55:47.784125    5068 command_runner.go:130] ! W0507 19:54:33.035288       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:47.784194    5068 command_runner.go:130] ! W0507 19:54:33.035294       1 genericapiserver.go:733] Skipping API certificates.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:47.784194    5068 command_runner.go:130] ! I0507 19:54:33.035838       1 handler.go:286] Adding GroupVersion coordination.k8s.io v1 to ResourceManager
	I0507 19:55:47.784194    5068 command_runner.go:130] ! W0507 19:54:33.035918       1 genericapiserver.go:733] Skipping API coordination.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:47.784194    5068 command_runner.go:130] ! W0507 19:54:33.035968       1 genericapiserver.go:733] Skipping API discovery.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:47.784194    5068 command_runner.go:130] ! I0507 19:54:33.036453       1 handler.go:286] Adding GroupVersion discovery.k8s.io v1 to ResourceManager
	I0507 19:55:47.784265    5068 command_runner.go:130] ! I0507 19:54:33.038094       1 handler.go:286] Adding GroupVersion networking.k8s.io v1 to ResourceManager
	I0507 19:55:47.784265    5068 command_runner.go:130] ! W0507 19:54:33.038196       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:47.784265    5068 command_runner.go:130] ! W0507 19:54:33.038204       1 genericapiserver.go:733] Skipping API networking.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:47.784265    5068 command_runner.go:130] ! I0507 19:54:33.038675       1 handler.go:286] Adding GroupVersion node.k8s.io v1 to ResourceManager
	I0507 19:55:47.784332    5068 command_runner.go:130] ! W0507 19:54:33.038880       1 genericapiserver.go:733] Skipping API node.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:47.784332    5068 command_runner.go:130] ! W0507 19:54:33.038891       1 genericapiserver.go:733] Skipping API node.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:47.784332    5068 command_runner.go:130] ! I0507 19:54:33.039628       1 handler.go:286] Adding GroupVersion policy v1 to ResourceManager
	I0507 19:55:47.784332    5068 command_runner.go:130] ! W0507 19:54:33.039798       1 genericapiserver.go:733] Skipping API policy/v1beta1 because it has no resources.
	I0507 19:55:47.784332    5068 command_runner.go:130] ! I0507 19:54:33.041524       1 handler.go:286] Adding GroupVersion rbac.authorization.k8s.io v1 to ResourceManager
	I0507 19:55:47.784400    5068 command_runner.go:130] ! W0507 19:54:33.041621       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:47.784400    5068 command_runner.go:130] ! W0507 19:54:33.041630       1 genericapiserver.go:733] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:47.784400    5068 command_runner.go:130] ! I0507 19:54:33.042180       1 handler.go:286] Adding GroupVersion scheduling.k8s.io v1 to ResourceManager
	I0507 19:55:47.784400    5068 command_runner.go:130] ! W0507 19:54:33.042199       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:47.784469    5068 command_runner.go:130] ! W0507 19:54:33.042204       1 genericapiserver.go:733] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:47.784469    5068 command_runner.go:130] ! I0507 19:54:33.044893       1 handler.go:286] Adding GroupVersion storage.k8s.io v1 to ResourceManager
	I0507 19:55:47.784469    5068 command_runner.go:130] ! W0507 19:54:33.045016       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:47.784469    5068 command_runner.go:130] ! W0507 19:54:33.045025       1 genericapiserver.go:733] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:47.784523    5068 command_runner.go:130] ! I0507 19:54:33.046333       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1 to ResourceManager
	I0507 19:55:47.784523    5068 command_runner.go:130] ! I0507 19:54:33.047629       1 handler.go:286] Adding GroupVersion flowcontrol.apiserver.k8s.io v1beta3 to ResourceManager
	I0507 19:55:47.784556    5068 command_runner.go:130] ! W0507 19:54:33.047767       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta2 because it has no resources.
	I0507 19:55:47.784556    5068 command_runner.go:130] ! W0507 19:54:33.047776       1 genericapiserver.go:733] Skipping API flowcontrol.apiserver.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:47.784587    5068 command_runner.go:130] ! I0507 19:54:33.052196       1 handler.go:286] Adding GroupVersion apps v1 to ResourceManager
	I0507 19:55:47.784603    5068 command_runner.go:130] ! W0507 19:54:33.052296       1 genericapiserver.go:733] Skipping API apps/v1beta2 because it has no resources.
	I0507 19:55:47.784603    5068 command_runner.go:130] ! W0507 19:54:33.052305       1 genericapiserver.go:733] Skipping API apps/v1beta1 because it has no resources.
	I0507 19:55:47.784603    5068 command_runner.go:130] ! I0507 19:54:33.054428       1 handler.go:286] Adding GroupVersion admissionregistration.k8s.io v1 to ResourceManager
	I0507 19:55:47.784603    5068 command_runner.go:130] ! W0507 19:54:33.054530       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:47.784665    5068 command_runner.go:130] ! W0507 19:54:33.054538       1 genericapiserver.go:733] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
	I0507 19:55:47.784665    5068 command_runner.go:130] ! I0507 19:54:33.055154       1 handler.go:286] Adding GroupVersion events.k8s.io v1 to ResourceManager
	I0507 19:55:47.784665    5068 command_runner.go:130] ! W0507 19:54:33.055244       1 genericapiserver.go:733] Skipping API events.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:47.784665    5068 command_runner.go:130] ! I0507 19:54:33.069859       1 handler.go:286] Adding GroupVersion apiregistration.k8s.io v1 to ResourceManager
	I0507 19:55:47.784734    5068 command_runner.go:130] ! W0507 19:54:33.070043       1 genericapiserver.go:733] Skipping API apiregistration.k8s.io/v1beta1 because it has no resources.
	I0507 19:55:47.784734    5068 command_runner.go:130] ! I0507 19:54:33.594507       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0507 19:55:47.784734    5068 command_runner.go:130] ! I0507 19:54:33.594682       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0507 19:55:47.784802    5068 command_runner.go:130] ! I0507 19:54:33.595540       1 dynamic_serving_content.go:132] "Starting controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0507 19:55:47.784802    5068 command_runner.go:130] ! I0507 19:54:33.595924       1 secure_serving.go:213] Serving securely on [::]:8443
	I0507 19:55:47.784802    5068 command_runner.go:130] ! I0507 19:54:33.596143       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 19:55:47.784802    5068 command_runner.go:130] ! I0507 19:54:33.596346       1 cluster_authentication_trust_controller.go:440] Starting cluster_authentication_trust_controller controller
	I0507 19:55:47.784873    5068 command_runner.go:130] ! I0507 19:54:33.596374       1 shared_informer.go:313] Waiting for caches to sync for cluster_authentication_trust_controller
	I0507 19:55:47.784873    5068 command_runner.go:130] ! I0507 19:54:33.598256       1 available_controller.go:423] Starting AvailableConditionController
	I0507 19:55:47.784873    5068 command_runner.go:130] ! I0507 19:54:33.598413       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
	I0507 19:55:47.784873    5068 command_runner.go:130] ! I0507 19:54:33.598667       1 customresource_discovery_controller.go:289] Starting DiscoveryController
	I0507 19:55:47.784873    5068 command_runner.go:130] ! I0507 19:54:33.598950       1 controller.go:116] Starting legacy_token_tracking_controller
	I0507 19:55:47.784941    5068 command_runner.go:130] ! I0507 19:54:33.599041       1 shared_informer.go:313] Waiting for caches to sync for configmaps
	I0507 19:55:47.784941    5068 command_runner.go:130] ! I0507 19:54:33.599147       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0507 19:55:47.784941    5068 command_runner.go:130] ! I0507 19:54:33.599437       1 apf_controller.go:374] Starting API Priority and Fairness config controller
	I0507 19:55:47.784941    5068 command_runner.go:130] ! I0507 19:54:33.600282       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0507 19:55:47.785007    5068 command_runner.go:130] ! I0507 19:54:33.600293       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0507 19:55:47.785007    5068 command_runner.go:130] ! I0507 19:54:33.600310       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0507 19:55:47.785007    5068 command_runner.go:130] ! I0507 19:54:33.600988       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0507 19:55:47.785007    5068 command_runner.go:130] ! I0507 19:54:33.601389       1 aggregator.go:163] waiting for initial CRD sync...
	I0507 19:55:47.785007    5068 command_runner.go:130] ! I0507 19:54:33.601406       1 controller.go:78] Starting OpenAPI AggregationController
	I0507 19:55:47.785086    5068 command_runner.go:130] ! I0507 19:54:33.601452       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0507 19:55:47.785086    5068 command_runner.go:130] ! I0507 19:54:33.601517       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0507 19:55:47.785086    5068 command_runner.go:130] ! I0507 19:54:33.603473       1 controller.go:139] Starting OpenAPI controller
	I0507 19:55:47.785086    5068 command_runner.go:130] ! I0507 19:54:33.603607       1 controller.go:87] Starting OpenAPI V3 controller
	I0507 19:55:47.785086    5068 command_runner.go:130] ! I0507 19:54:33.603676       1 naming_controller.go:291] Starting NamingConditionController
	I0507 19:55:47.785086    5068 command_runner.go:130] ! I0507 19:54:33.603772       1 establishing_controller.go:76] Starting EstablishingController
	I0507 19:55:47.785182    5068 command_runner.go:130] ! I0507 19:54:33.603950       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0507 19:55:47.785182    5068 command_runner.go:130] ! I0507 19:54:33.606447       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0507 19:55:47.785182    5068 command_runner.go:130] ! I0507 19:54:33.606495       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0507 19:55:47.785182    5068 command_runner.go:130] ! I0507 19:54:33.617581       1 system_namespaces_controller.go:67] Starting system namespaces controller
	I0507 19:55:47.785251    5068 command_runner.go:130] ! I0507 19:54:33.640887       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0507 19:55:47.785251    5068 command_runner.go:130] ! I0507 19:54:33.641139       1 shared_informer.go:313] Waiting for caches to sync for crd-autoregister
	I0507 19:55:47.785279    5068 command_runner.go:130] ! I0507 19:54:33.700222       1 shared_informer.go:320] Caches are synced for configmaps
	I0507 19:55:47.785279    5068 command_runner.go:130] ! I0507 19:54:33.702782       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0507 19:55:47.785279    5068 command_runner.go:130] ! I0507 19:54:33.702797       1 policy_source.go:224] refreshing policies
	I0507 19:55:47.785348    5068 command_runner.go:130] ! I0507 19:54:33.720688       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0507 19:55:47.785348    5068 command_runner.go:130] ! I0507 19:54:33.721334       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0507 19:55:47.785396    5068 command_runner.go:130] ! I0507 19:54:33.739066       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0507 19:55:47.785396    5068 command_runner.go:130] ! I0507 19:54:33.741686       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0507 19:55:47.785396    5068 command_runner.go:130] ! I0507 19:54:33.742272       1 aggregator.go:165] initial CRD sync complete...
	I0507 19:55:47.785396    5068 command_runner.go:130] ! I0507 19:54:33.742439       1 autoregister_controller.go:141] Starting autoregister controller
	I0507 19:55:47.785396    5068 command_runner.go:130] ! I0507 19:54:33.742581       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0507 19:55:47.785470    5068 command_runner.go:130] ! I0507 19:54:33.742709       1 cache.go:39] Caches are synced for autoregister controller
	I0507 19:55:47.785470    5068 command_runner.go:130] ! I0507 19:54:33.796399       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0507 19:55:47.785495    5068 command_runner.go:130] ! I0507 19:54:33.800122       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0507 19:55:47.785495    5068 command_runner.go:130] ! I0507 19:54:33.800332       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0507 19:55:47.785495    5068 command_runner.go:130] ! I0507 19:54:33.800503       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0507 19:55:47.785495    5068 command_runner.go:130] ! I0507 19:54:33.825705       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0507 19:55:47.785565    5068 command_runner.go:130] ! I0507 19:54:34.607945       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0507 19:55:47.785565    5068 command_runner.go:130] ! W0507 19:54:35.478370       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.135.22]
	I0507 19:55:47.785594    5068 command_runner.go:130] ! I0507 19:54:35.480604       1 controller.go:615] quota admission added evaluator for: endpoints
	I0507 19:55:47.785594    5068 command_runner.go:130] ! I0507 19:54:35.493313       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0507 19:55:47.785594    5068 command_runner.go:130] ! I0507 19:54:36.265995       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0507 19:55:47.785594    5068 command_runner.go:130] ! I0507 19:54:36.444774       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0507 19:55:47.785594    5068 command_runner.go:130] ! I0507 19:54:36.460585       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0507 19:55:47.785684    5068 command_runner.go:130] ! I0507 19:54:36.562263       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0507 19:55:47.785684    5068 command_runner.go:130] ! I0507 19:54:36.572917       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0507 19:55:47.793182    5068 logs.go:123] Gathering logs for kube-scheduler [7cefdac2050f] ...
	I0507 19:55:47.793182    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 7cefdac2050f"
	I0507 19:55:47.820181    5068 command_runner.go:130] ! I0507 19:33:39.572817       1 serving.go:380] Generated self-signed cert in-memory
	I0507 19:55:47.820181    5068 command_runner.go:130] ! W0507 19:33:41.035488       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0507 19:55:47.820181    5068 command_runner.go:130] ! W0507 19:33:41.035523       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:47.820181    5068 command_runner.go:130] ! W0507 19:33:41.035535       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0507 19:55:47.820181    5068 command_runner.go:130] ! W0507 19:33:41.035542       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0507 19:55:47.820181    5068 command_runner.go:130] ! I0507 19:33:41.100225       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0507 19:55:47.820181    5068 command_runner.go:130] ! I0507 19:33:41.104133       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:47.820181    5068 command_runner.go:130] ! I0507 19:33:41.108249       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0507 19:55:47.820181    5068 command_runner.go:130] ! I0507 19:33:41.108399       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 19:55:47.820181    5068 command_runner.go:130] ! I0507 19:33:41.108383       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0507 19:55:47.820181    5068 command_runner.go:130] ! I0507 19:33:41.108658       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0507 19:55:47.820181    5068 command_runner.go:130] ! W0507 19:33:41.115439       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0507 19:55:47.820181    5068 command_runner.go:130] ! E0507 19:33:41.115515       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0507 19:55:47.820181    5068 command_runner.go:130] ! W0507 19:33:41.115737       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0507 19:55:47.820181    5068 command_runner.go:130] ! E0507 19:33:41.115969       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0507 19:55:47.820181    5068 command_runner.go:130] ! W0507 19:33:41.115744       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0507 19:55:47.820181    5068 command_runner.go:130] ! E0507 19:33:41.116415       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0507 19:55:47.820181    5068 command_runner.go:130] ! W0507 19:33:41.116670       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:47.820181    5068 command_runner.go:130] ! E0507 19:33:41.117593       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:47.820181    5068 command_runner.go:130] ! W0507 19:33:41.119709       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0507 19:55:47.820181    5068 command_runner.go:130] ! E0507 19:33:41.120474       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0507 19:55:47.820181    5068 command_runner.go:130] ! W0507 19:33:41.119953       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0507 19:55:47.820181    5068 command_runner.go:130] ! E0507 19:33:41.121523       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0507 19:55:47.820181    5068 command_runner.go:130] ! W0507 19:33:41.120191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:47.821189    5068 command_runner.go:130] ! W0507 19:33:41.120237       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:47.821189    5068 command_runner.go:130] ! W0507 19:33:41.120278       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0507 19:55:47.821189    5068 command_runner.go:130] ! W0507 19:33:41.120316       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:47.821189    5068 command_runner.go:130] ! W0507 19:33:41.120339       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0507 19:55:47.821189    5068 command_runner.go:130] ! W0507 19:33:41.120384       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0507 19:55:47.821189    5068 command_runner.go:130] ! W0507 19:33:41.120417       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0507 19:55:47.821189    5068 command_runner.go:130] ! W0507 19:33:41.120451       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0507 19:55:47.821189    5068 command_runner.go:130] ! E0507 19:33:41.122419       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:47.821189    5068 command_runner.go:130] ! W0507 19:33:41.123409       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:47.821189    5068 command_runner.go:130] ! E0507 19:33:41.123928       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:47.821189    5068 command_runner.go:130] ! E0507 19:33:41.123939       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:47.821189    5068 command_runner.go:130] ! E0507 19:33:41.123946       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0507 19:55:47.821785    5068 command_runner.go:130] ! E0507 19:33:41.123954       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0507 19:55:47.821785    5068 command_runner.go:130] ! E0507 19:33:41.123963       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0507 19:55:47.821873    5068 command_runner.go:130] ! E0507 19:33:41.124140       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0507 19:55:47.821873    5068 command_runner.go:130] ! E0507 19:33:41.125875       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0507 19:55:47.822037    5068 command_runner.go:130] ! E0507 19:33:41.125886       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:47.822073    5068 command_runner.go:130] ! W0507 19:33:41.948129       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0507 19:55:47.822154    5068 command_runner.go:130] ! E0507 19:33:41.948157       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	I0507 19:55:47.822154    5068 command_runner.go:130] ! W0507 19:33:41.994257       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:47.822237    5068 command_runner.go:130] ! E0507 19:33:41.994824       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:47.822303    5068 command_runner.go:130] ! W0507 19:33:42.109252       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:47.822371    5068 command_runner.go:130] ! E0507 19:33:42.109623       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:47.822371    5068 command_runner.go:130] ! W0507 19:33:42.156561       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0507 19:55:47.822439    5068 command_runner.go:130] ! E0507 19:33:42.157128       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0507 19:55:47.822505    5068 command_runner.go:130] ! W0507 19:33:42.162271       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:47.822505    5068 command_runner.go:130] ! E0507 19:33:42.162599       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:47.822574    5068 command_runner.go:130] ! W0507 19:33:42.229371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0507 19:55:47.822704    5068 command_runner.go:130] ! E0507 19:33:42.229525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	I0507 19:55:47.822754    5068 command_runner.go:130] ! W0507 19:33:42.264429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0507 19:55:47.822754    5068 command_runner.go:130] ! E0507 19:33:42.264596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	I0507 19:55:47.822754    5068 command_runner.go:130] ! W0507 19:33:42.284763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0507 19:55:47.822754    5068 command_runner.go:130] ! E0507 19:33:42.284872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0507 19:55:47.822754    5068 command_runner.go:130] ! W0507 19:33:42.338396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0507 19:55:47.822754    5068 command_runner.go:130] ! E0507 19:33:42.338683       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0507 19:55:47.822754    5068 command_runner.go:130] ! W0507 19:33:42.356861       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0507 19:55:47.822754    5068 command_runner.go:130] ! E0507 19:33:42.356964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0507 19:55:47.822754    5068 command_runner.go:130] ! W0507 19:33:42.435844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0507 19:55:47.822754    5068 command_runner.go:130] ! E0507 19:33:42.436739       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	I0507 19:55:47.823281    5068 command_runner.go:130] ! W0507 19:33:42.446379       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:47.823359    5068 command_runner.go:130] ! E0507 19:33:42.446557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:47.823359    5068 command_runner.go:130] ! W0507 19:33:42.489593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:47.823436    5068 command_runner.go:130] ! E0507 19:33:42.489896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	I0507 19:55:47.823510    5068 command_runner.go:130] ! W0507 19:33:42.647287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0507 19:55:47.823510    5068 command_runner.go:130] ! E0507 19:33:42.648065       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	I0507 19:55:47.823585    5068 command_runner.go:130] ! W0507 19:33:42.657928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0507 19:55:47.823660    5068 command_runner.go:130] ! E0507 19:33:42.658018       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0507 19:55:47.823660    5068 command_runner.go:130] ! I0507 19:33:43.909008       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0507 19:55:47.823733    5068 command_runner.go:130] ! E0507 19:52:16.714078       1 run.go:74] "command failed" err="finished without leader elect"
	I0507 19:55:47.832771    5068 logs.go:123] Gathering logs for container status ...
	I0507 19:55:47.832771    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 19:55:47.888425    5068 command_runner.go:130] > CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	I0507 19:55:47.888425    5068 command_runner.go:130] > 78ecb8cdfd06c       8c811b4aec35f                                                                                         10 seconds ago       Running             busybox                   1                   f8dc35309168f       busybox-fc5497c4f-gcqlv
	I0507 19:55:47.888503    5068 command_runner.go:130] > d27627c198085       cbb01a7bd410d                                                                                         10 seconds ago       Running             coredns                   1                   56c438bec1777       coredns-7db6d8ff4d-5j966
	I0507 19:55:47.888523    5068 command_runner.go:130] > 4c93a69b2eee4       6e38f40d628db                                                                                         32 seconds ago       Running             storage-provisioner       2                   09d2fda974adf       storage-provisioner
	I0507 19:55:47.888523    5068 command_runner.go:130] > 29b5cae0b8f14       4950bb10b3f87                                                                                         About a minute ago   Running             kindnet-cni               1                   857f6b5630910       kindnet-zw4r9
	I0507 19:55:47.888523    5068 command_runner.go:130] > 5255a972ff6ce       a0bf559e280cf                                                                                         About a minute ago   Running             kube-proxy                1                   deb171c003562       kube-proxy-c9gw5
	I0507 19:55:47.888595    5068 command_runner.go:130] > d1e3e4629bc4a       6e38f40d628db                                                                                         About a minute ago   Exited              storage-provisioner       1                   09d2fda974adf       storage-provisioner
	I0507 19:55:47.888595    5068 command_runner.go:130] > 7c95e3addc4b8       c42f13656d0b2                                                                                         About a minute ago   Running             kube-apiserver            0                   fec63580ff266       kube-apiserver-multinode-600000
	I0507 19:55:47.888595    5068 command_runner.go:130] > ac320a872e77c       3861cfcd7c04c                                                                                         About a minute ago   Running             etcd                      0                   c666fba0d0753       etcd-multinode-600000
	I0507 19:55:47.889104    5068 command_runner.go:130] > 922d1e2b87454       c7aad43836fa5                                                                                         About a minute ago   Running             kube-controller-manager   1                   5c37290307d14       kube-controller-manager-multinode-600000
	I0507 19:55:47.889104    5068 command_runner.go:130] > 45341720d5be3       259c8277fcbbc                                                                                         About a minute ago   Running             kube-scheduler            1                   89c8a2313bcaf       kube-scheduler-multinode-600000
	I0507 19:55:47.889104    5068 command_runner.go:130] > 66301c2be7060       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago       Exited              busybox                   0                   4afb10dc8b115       busybox-fc5497c4f-gcqlv
	I0507 19:55:47.889104    5068 command_runner.go:130] > 9550b237d8d7b       cbb01a7bd410d                                                                                         21 minutes ago       Exited              coredns                   0                   99af61c6e282a       coredns-7db6d8ff4d-5j966
	I0507 19:55:47.889104    5068 command_runner.go:130] > 2d49ad078ed35       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              21 minutes ago       Exited              kindnet-cni               0                   58ebd877d77fb       kindnet-zw4r9
	I0507 19:55:47.889104    5068 command_runner.go:130] > aa9692c1fbd3b       a0bf559e280cf                                                                                         21 minutes ago       Exited              kube-proxy                0                   70cff02905e8f       kube-proxy-c9gw5
	I0507 19:55:47.889104    5068 command_runner.go:130] > 7cefdac2050fa       259c8277fcbbc                                                                                         22 minutes ago       Exited              kube-scheduler            0                   75f27faec2ed6       kube-scheduler-multinode-600000
	I0507 19:55:47.889104    5068 command_runner.go:130] > 3067f16e2e380       c7aad43836fa5                                                                                         22 minutes ago       Exited              kube-controller-manager   0                   af16a92d7c1cc       kube-controller-manager-multinode-600000
	I0507 19:55:47.894786    5068 logs.go:123] Gathering logs for kube-scheduler [45341720d5be] ...
	I0507 19:55:47.894786    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 45341720d5be"
	I0507 19:55:47.922495    5068 command_runner.go:130] ! I0507 19:54:30.888703       1 serving.go:380] Generated self-signed cert in-memory
	I0507 19:55:47.923472    5068 command_runner.go:130] ! W0507 19:54:33.652802       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	I0507 19:55:47.923472    5068 command_runner.go:130] ! W0507 19:54:33.652844       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	I0507 19:55:47.923472    5068 command_runner.go:130] ! W0507 19:54:33.652885       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	I0507 19:55:47.923536    5068 command_runner.go:130] ! W0507 19:54:33.652896       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0507 19:55:47.923536    5068 command_runner.go:130] ! I0507 19:54:33.748572       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0507 19:55:47.923536    5068 command_runner.go:130] ! I0507 19:54:33.749371       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:47.923536    5068 command_runner.go:130] ! I0507 19:54:33.757368       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0507 19:55:47.923536    5068 command_runner.go:130] ! I0507 19:54:33.758296       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0507 19:55:47.923536    5068 command_runner.go:130] ! I0507 19:54:33.758449       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0507 19:55:47.923536    5068 command_runner.go:130] ! I0507 19:54:33.759872       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 19:55:47.923536    5068 command_runner.go:130] ! I0507 19:54:33.860140       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0507 19:55:47.925470    5068 logs.go:123] Gathering logs for kube-proxy [5255a972ff6c] ...
	I0507 19:55:47.925470    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 5255a972ff6c"
	I0507 19:55:47.948264    5068 command_runner.go:130] ! I0507 19:54:35.575583       1 server_linux.go:69] "Using iptables proxy"
	I0507 19:55:47.948326    5068 command_runner.go:130] ! I0507 19:54:35.605564       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.135.22"]
	I0507 19:55:47.948326    5068 command_runner.go:130] ! I0507 19:54:35.819515       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0507 19:55:47.948326    5068 command_runner.go:130] ! I0507 19:54:35.819549       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0507 19:55:47.948326    5068 command_runner.go:130] ! I0507 19:54:35.819565       1 server_linux.go:165] "Using iptables Proxier"
	I0507 19:55:47.948326    5068 command_runner.go:130] ! I0507 19:54:35.837879       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0507 19:55:47.948326    5068 command_runner.go:130] ! I0507 19:54:35.838133       1 server.go:872] "Version info" version="v1.30.0"
	I0507 19:55:47.948326    5068 command_runner.go:130] ! I0507 19:54:35.838147       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:47.948326    5068 command_runner.go:130] ! I0507 19:54:35.845888       1 config.go:192] "Starting service config controller"
	I0507 19:55:47.948326    5068 command_runner.go:130] ! I0507 19:54:35.848183       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0507 19:55:47.948326    5068 command_runner.go:130] ! I0507 19:54:35.848226       1 config.go:319] "Starting node config controller"
	I0507 19:55:47.948326    5068 command_runner.go:130] ! I0507 19:54:35.848406       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0507 19:55:47.948326    5068 command_runner.go:130] ! I0507 19:54:35.849079       1 config.go:101] "Starting endpoint slice config controller"
	I0507 19:55:47.948326    5068 command_runner.go:130] ! I0507 19:54:35.849088       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0507 19:55:47.948326    5068 command_runner.go:130] ! I0507 19:54:35.954590       1 shared_informer.go:320] Caches are synced for node config
	I0507 19:55:47.948326    5068 command_runner.go:130] ! I0507 19:54:35.954640       1 shared_informer.go:320] Caches are synced for service config
	I0507 19:55:47.948326    5068 command_runner.go:130] ! I0507 19:54:35.954677       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0507 19:55:47.950276    5068 logs.go:123] Gathering logs for kube-controller-manager [3067f16e2e38] ...
	I0507 19:55:47.950311    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 3067f16e2e38"
	I0507 19:55:47.979362    5068 command_runner.go:130] ! I0507 19:33:39.646652       1 serving.go:380] Generated self-signed cert in-memory
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:40.017908       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:40.018051       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:40.019973       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:40.020228       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:40.023071       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:40.024192       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:44.035484       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:44.035669       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:44.062270       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:44.062488       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:44.062501       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:44.082052       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:44.082328       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:44.082342       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:44.097853       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:44.100760       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:44.101645       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0507 19:55:47.979433    5068 command_runner.go:130] ! I0507 19:33:44.135768       1 shared_informer.go:320] Caches are synced for tokens
	I0507 19:55:47.979954    5068 command_runner.go:130] ! I0507 19:33:44.143316       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0507 19:55:47.979995    5068 command_runner.go:130] ! I0507 19:33:44.143654       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.143854       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.156569       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.156806       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.156821       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.193774       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.194041       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.224957       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.225326       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.225340       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.264579       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.265097       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.265116       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.287038       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.287393       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.287436       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.356902       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0507 19:55:47.980030    5068 command_runner.go:130] ! I0507 19:33:44.357443       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0507 19:55:47.981248    5068 command_runner.go:130] ! I0507 19:33:44.357459       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0507 19:55:47.981329    5068 command_runner.go:130] ! E0507 19:33:44.380020       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0507 19:55:47.981329    5068 command_runner.go:130] ! I0507 19:33:44.380113       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0507 19:55:47.981329    5068 command_runner.go:130] ! I0507 19:33:44.504313       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0507 19:55:47.981329    5068 command_runner.go:130] ! I0507 19:33:44.504889       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0507 19:55:47.981329    5068 command_runner.go:130] ! I0507 19:33:44.504939       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.642194       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.642248       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.642259       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.952758       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.952894       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.952916       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.952951       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.952971       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.953093       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.953113       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.953131       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.953150       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.953173       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.953207       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.953385       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.953527       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.953695       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.953874       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.954040       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.954064       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.954206       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0507 19:55:47.981406    5068 command_runner.go:130] ! I0507 19:33:44.954278       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0507 19:55:47.981929    5068 command_runner.go:130] ! I0507 19:33:44.954308       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0507 19:55:47.981969    5068 command_runner.go:130] ! I0507 19:33:44.954374       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0507 19:55:47.981969    5068 command_runner.go:130] ! I0507 19:33:44.954592       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0507 19:55:47.982015    5068 command_runner.go:130] ! I0507 19:33:44.954813       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:44.954968       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:44.959507       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.092915       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.092938       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.092974       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.093078       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.093089       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.248481       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.248590       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.248600       1 shared_informer.go:313] Waiting for caches to sync for job
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.403516       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.403864       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.404124       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.547079       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.547101       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.547218       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.547228       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.695293       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.695376       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.695385       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.842519       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.843201       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.843464       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.843612       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.843670       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.994121       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.994195       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:45.994559       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:46.142670       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0507 19:55:47.982073    5068 command_runner.go:130] ! I0507 19:33:46.142767       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0507 19:55:47.982595    5068 command_runner.go:130] ! I0507 19:33:46.142777       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0507 19:55:47.982595    5068 command_runner.go:130] ! I0507 19:33:46.292842       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0507 19:55:47.982636    5068 command_runner.go:130] ! I0507 19:33:46.292937       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.292979       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.293532       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.443522       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.443783       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.443796       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.639478       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.639695       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.640237       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.640384       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.802195       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.802321       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.802333       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.839302       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.839419       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.839439       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.839547       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.995880       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.996105       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.996124       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.996192       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.996213       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.996264       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.996515       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:47.982670    5068 command_runner.go:130] ! I0507 19:33:46.997757       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0507 19:55:47.983189    5068 command_runner.go:130] ! I0507 19:33:46.997789       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0507 19:55:47.983227    5068 command_runner.go:130] ! I0507 19:33:46.998232       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0507 19:55:47.983227    5068 command_runner.go:130] ! I0507 19:33:46.998256       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:46.998461       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:46.998581       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:47.144659       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:47.144787       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:47.144840       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:47.188132       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:47.188178       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:47.188191       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:47.238083       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:47.238123       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:47.394585       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:47.394777       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:47.394803       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:47.394838       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:57.452785       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:57.452897       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:57.453626       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:57.453826       1 shared_informer.go:313] Waiting for caches to sync for node
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:57.483145       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:57.483422       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:57.493863       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:57.494296       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:57.494585       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:57.506181       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:57.506211       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:57.506219       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:57.506448       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:57.506471       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0507 19:55:47.983261    5068 command_runner.go:130] ! E0507 19:33:57.508667       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0507 19:55:47.983261    5068 command_runner.go:130] ! I0507 19:33:57.508863       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0507 19:55:47.983782    5068 command_runner.go:130] ! I0507 19:33:57.536071       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0507 19:55:47.983858    5068 command_runner.go:130] ! I0507 19:33:57.536238       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0507 19:55:47.983895    5068 command_runner.go:130] ! I0507 19:33:57.536958       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0507 19:55:47.983927    5068 command_runner.go:130] ! I0507 19:33:57.552316       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0507 19:55:47.983927    5068 command_runner.go:130] ! I0507 19:33:57.552368       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.552583       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.552830       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.602799       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.604255       1 shared_informer.go:320] Caches are synced for expand
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.604567       1 shared_informer.go:320] Caches are synced for cronjob
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.604710       1 shared_informer.go:320] Caches are synced for PV protection
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.616713       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000\" does not exist"
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.620217       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.625534       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.637418       1 shared_informer.go:320] Caches are synced for namespace
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.640979       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.643690       1 shared_informer.go:320] Caches are synced for ephemeral
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.643962       1 shared_informer.go:320] Caches are synced for crt configmap
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.643944       1 shared_informer.go:320] Caches are synced for endpoint
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.645645       1 shared_informer.go:320] Caches are synced for PVC protection
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.650051       1 shared_informer.go:320] Caches are synced for job
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.654615       1 shared_informer.go:320] Caches are synced for node
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.654828       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.654976       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.658548       1 shared_informer.go:320] Caches are synced for stateful set
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.658557       1 shared_informer.go:320] Caches are synced for TTL
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.658578       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.660814       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.662570       1 shared_informer.go:320] Caches are synced for GC
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.666627       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.682592       1 shared_informer.go:320] Caches are synced for service account
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.683797       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.686866       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000" podCIDRs=["10.244.0.0/24"]
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.688271       1 shared_informer.go:320] Caches are synced for persistent volume
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.688450       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.693833       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0507 19:55:47.983963    5068 command_runner.go:130] ! I0507 19:33:57.695065       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0507 19:55:47.984484    5068 command_runner.go:130] ! I0507 19:33:57.696405       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0507 19:55:47.984484    5068 command_runner.go:130] ! I0507 19:33:57.696588       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0507 19:55:47.984523    5068 command_runner.go:130] ! I0507 19:33:57.699644       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0507 19:55:47.984523    5068 command_runner.go:130] ! I0507 19:33:57.700059       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0507 19:55:47.984585    5068 command_runner.go:130] ! I0507 19:33:57.700324       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0507 19:55:47.984585    5068 command_runner.go:130] ! I0507 19:33:57.703629       1 shared_informer.go:320] Caches are synced for daemon sets
	I0507 19:55:47.984585    5068 command_runner.go:130] ! I0507 19:33:57.710906       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0507 19:55:47.984634    5068 command_runner.go:130] ! I0507 19:33:57.744541       1 shared_informer.go:320] Caches are synced for HPA
	I0507 19:55:47.984656    5068 command_runner.go:130] ! I0507 19:33:57.744580       1 shared_informer.go:320] Caches are synced for taint
	I0507 19:55:47.984656    5068 command_runner.go:130] ! I0507 19:33:57.744652       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0507 19:55:47.984703    5068 command_runner.go:130] ! I0507 19:33:57.744737       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000"
	I0507 19:55:47.984735    5068 command_runner.go:130] ! I0507 19:33:57.744768       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0507 19:55:47.984735    5068 command_runner.go:130] ! I0507 19:33:57.764904       1 shared_informer.go:320] Caches are synced for resource quota
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:33:57.793156       1 shared_informer.go:320] Caches are synced for deployment
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:33:57.806522       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:33:57.841338       1 shared_informer.go:320] Caches are synced for disruption
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:33:57.848178       1 shared_informer.go:320] Caches are synced for attach detach
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:33:57.857076       1 shared_informer.go:320] Caches are synced for resource quota
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:33:58.320735       1 shared_informer.go:320] Caches are synced for garbage collector
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:33:58.353360       1 shared_informer.go:320] Caches are synced for garbage collector
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:33:58.353634       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:33:58.648491       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="254.239192ms"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:33:58.768889       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="120.227252ms"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:33:58.768980       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="57.703µs"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:33:59.385629       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="74.4593ms"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:33:59.400563       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="14.850657ms"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:33:59.442803       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="42.020809ms"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:33:59.442937       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="66.204µs"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:34:10.730717       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="75.405µs"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:34:10.778543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="100.807µs"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:34:12.746728       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:34:12.843910       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="71.905µs"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:34:12.916087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.128233ms"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:34:12.920189       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="131.008µs"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:36:39.748714       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m02\" does not exist"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:36:39.768095       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000-m02" podCIDRs=["10.244.1.0/24"]
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:36:42.771386       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000-m02"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:36:59.833069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:37:23.261574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.822997ms"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:37:23.275925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.242181ms"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:37:23.277411       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.303µs"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:37:25.468822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.984518ms"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:37:25.471412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.381856ms"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:37:26.028543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.755438ms"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:37:26.029180       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.706µs"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:40:53.034791       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:40:53.035911       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m03\" does not exist"
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:40:53.048242       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000-m03" podCIDRs=["10.244.2.0/24"]
	I0507 19:55:47.984786    5068 command_runner.go:130] ! I0507 19:40:57.837925       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000-m03"
	I0507 19:55:47.985579    5068 command_runner.go:130] ! I0507 19:41:13.622605       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:47.985625    5068 command_runner.go:130] ! I0507 19:48:02.948548       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:47.985625    5068 command_runner.go:130] ! I0507 19:50:20.695158       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:47.985686    5068 command_runner.go:130] ! I0507 19:50:25.866050       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m03\" does not exist"
	I0507 19:55:47.985764    5068 command_runner.go:130] ! I0507 19:50:25.866126       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:47.985764    5068 command_runner.go:130] ! I0507 19:50:25.887459       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000-m03" podCIDRs=["10.244.3.0/24"]
	I0507 19:55:47.985764    5068 command_runner.go:130] ! I0507 19:50:31.631900       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:47.985840    5068 command_runner.go:130] ! I0507 19:51:58.074557       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:48.001075    5068 logs.go:123] Gathering logs for kindnet [2d49ad078ed3] ...
	I0507 19:55:48.001075    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2d49ad078ed3"
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:07.116810       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:07.116911       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:07.117095       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:17.123472       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:17.123573       1 main.go:227] handling current node
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:17.123585       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:17.123594       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:17.124084       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:17.124175       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:27.134971       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:27.135112       1 main.go:227] handling current node
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:27.135127       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:27.135135       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:27.135337       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:27.135391       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:37.144428       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:37.144529       1 main.go:227] handling current node
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:37.144541       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:37.144549       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:37.144673       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:37.144698       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:47.154405       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:47.154529       1 main.go:227] handling current node
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:47.154543       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.026577    5068 command_runner.go:130] ! I0507 19:41:47.154551       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.027100    5068 command_runner.go:130] ! I0507 19:41:47.155068       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.027140    5068 command_runner.go:130] ! I0507 19:41:47.155088       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.027140    5068 command_runner.go:130] ! I0507 19:41:57.163844       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.027140    5068 command_runner.go:130] ! I0507 19:41:57.163910       1 main.go:227] handling current node
	I0507 19:55:48.027140    5068 command_runner.go:130] ! I0507 19:41:57.163920       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.027140    5068 command_runner.go:130] ! I0507 19:41:57.163926       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.027216    5068 command_runner.go:130] ! I0507 19:41:57.164061       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.027254    5068 command_runner.go:130] ! I0507 19:41:57.164070       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.027254    5068 command_runner.go:130] ! I0507 19:42:07.179518       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.027254    5068 command_runner.go:130] ! I0507 19:42:07.179623       1 main.go:227] handling current node
	I0507 19:55:48.027299    5068 command_runner.go:130] ! I0507 19:42:07.179635       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.027299    5068 command_runner.go:130] ! I0507 19:42:07.179643       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.027336    5068 command_runner.go:130] ! I0507 19:42:07.179805       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.027336    5068 command_runner.go:130] ! I0507 19:42:07.180030       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.027336    5068 command_runner.go:130] ! I0507 19:42:17.193528       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.027336    5068 command_runner.go:130] ! I0507 19:42:17.193636       1 main.go:227] handling current node
	I0507 19:55:48.027336    5068 command_runner.go:130] ! I0507 19:42:17.193649       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.027445    5068 command_runner.go:130] ! I0507 19:42:17.193657       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.027445    5068 command_runner.go:130] ! I0507 19:42:17.194171       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.027501    5068 command_runner.go:130] ! I0507 19:42:17.194408       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.027535    5068 command_runner.go:130] ! I0507 19:42:27.205877       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.027567    5068 command_runner.go:130] ! I0507 19:42:27.205918       1 main.go:227] handling current node
	I0507 19:55:48.027602    5068 command_runner.go:130] ! I0507 19:42:27.205929       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.027602    5068 command_runner.go:130] ! I0507 19:42:27.205936       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.027602    5068 command_runner.go:130] ! I0507 19:42:27.206343       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.027643    5068 command_runner.go:130] ! I0507 19:42:27.206360       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.027657    5068 command_runner.go:130] ! I0507 19:42:37.213680       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.027709    5068 command_runner.go:130] ! I0507 19:42:37.213766       1 main.go:227] handling current node
	I0507 19:55:48.027744    5068 command_runner.go:130] ! I0507 19:42:37.213780       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.027744    5068 command_runner.go:130] ! I0507 19:42:37.213788       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.027776    5068 command_runner.go:130] ! I0507 19:42:37.214204       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.027798    5068 command_runner.go:130] ! I0507 19:42:37.214303       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.027798    5068 command_runner.go:130] ! I0507 19:42:47.224946       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.027798    5068 command_runner.go:130] ! I0507 19:42:47.225125       1 main.go:227] handling current node
	I0507 19:55:48.027857    5068 command_runner.go:130] ! I0507 19:42:47.225139       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.027857    5068 command_runner.go:130] ! I0507 19:42:47.225148       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.027857    5068 command_runner.go:130] ! I0507 19:42:47.225499       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.027897    5068 command_runner.go:130] ! I0507 19:42:47.225556       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.027932    5068 command_runner.go:130] ! I0507 19:42:57.236504       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.027964    5068 command_runner.go:130] ! I0507 19:42:57.236681       1 main.go:227] handling current node
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:42:57.236699       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:42:57.237025       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:42:57.237359       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:42:57.237385       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:07.248420       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:07.248600       1 main.go:227] handling current node
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:07.248614       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:07.248622       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:07.249108       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:07.249189       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:17.265021       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:17.265056       1 main.go:227] handling current node
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:17.265067       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:17.265074       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:17.265713       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:17.265780       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:27.271270       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:27.271308       1 main.go:227] handling current node
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:27.271320       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:27.271326       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:27.271684       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:27.271715       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:37.279223       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:37.279323       1 main.go:227] handling current node
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:37.279336       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:37.279344       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:37.279894       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:37.280039       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:47.292160       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:47.292257       1 main.go:227] handling current node
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:47.292269       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:47.292276       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:47.292451       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:47.292531       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:57.302957       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:57.303129       1 main.go:227] handling current node
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:57.303144       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.027992    5068 command_runner.go:130] ! I0507 19:43:57.303152       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.028519    5068 command_runner.go:130] ! I0507 19:43:57.303598       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.028519    5068 command_runner.go:130] ! I0507 19:43:57.303754       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.028519    5068 command_runner.go:130] ! I0507 19:44:07.314533       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.028572    5068 command_runner.go:130] ! I0507 19:44:07.314565       1 main.go:227] handling current node
	I0507 19:55:48.028572    5068 command_runner.go:130] ! I0507 19:44:07.314575       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.028572    5068 command_runner.go:130] ! I0507 19:44:07.314581       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.028572    5068 command_runner.go:130] ! I0507 19:44:07.314878       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.028572    5068 command_runner.go:130] ! I0507 19:44:07.314965       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.028639    5068 command_runner.go:130] ! I0507 19:44:17.330535       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.028639    5068 command_runner.go:130] ! I0507 19:44:17.330644       1 main.go:227] handling current node
	I0507 19:55:48.028639    5068 command_runner.go:130] ! I0507 19:44:17.330657       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.028713    5068 command_runner.go:130] ! I0507 19:44:17.330665       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.028713    5068 command_runner.go:130] ! I0507 19:44:17.330781       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.028746    5068 command_runner.go:130] ! I0507 19:44:17.330805       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:27.345226       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:27.345325       1 main.go:227] handling current node
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:27.345338       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:27.345346       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:27.345594       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:27.345661       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:37.358952       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:37.359029       1 main.go:227] handling current node
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:37.359041       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:37.359049       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:37.359583       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:37.359942       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:47.372236       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:47.372327       1 main.go:227] handling current node
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:47.372340       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:47.372347       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:47.372619       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:47.372773       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:57.381408       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:57.381561       1 main.go:227] handling current node
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:57.381575       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:57.381583       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:57.388779       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:44:57.388820       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:45:07.401501       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:45:07.401539       1 main.go:227] handling current node
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:45:07.401551       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:45:07.401558       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:45:07.401946       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:45:07.401971       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:45:17.412152       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:45:17.412194       1 main.go:227] handling current node
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:45:17.412205       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:45:17.412546       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:45:17.412831       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.028773    5068 command_runner.go:130] ! I0507 19:45:17.412948       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.029298    5068 command_runner.go:130] ! I0507 19:45:27.420776       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.029298    5068 command_runner.go:130] ! I0507 19:45:27.420889       1 main.go:227] handling current node
	I0507 19:55:48.029341    5068 command_runner.go:130] ! I0507 19:45:27.420901       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.029341    5068 command_runner.go:130] ! I0507 19:45:27.420910       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.029389    5068 command_runner.go:130] ! I0507 19:45:27.421607       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.029389    5068 command_runner.go:130] ! I0507 19:45:27.421717       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.029428    5068 command_runner.go:130] ! I0507 19:45:37.427913       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.029428    5068 command_runner.go:130] ! I0507 19:45:37.428076       1 main.go:227] handling current node
	I0507 19:55:48.029428    5068 command_runner.go:130] ! I0507 19:45:37.428090       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.029469    5068 command_runner.go:130] ! I0507 19:45:37.428099       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.029508    5068 command_runner.go:130] ! I0507 19:45:37.428614       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.029508    5068 command_runner.go:130] ! I0507 19:45:37.428647       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.029549    5068 command_runner.go:130] ! I0507 19:45:47.434296       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.029549    5068 command_runner.go:130] ! I0507 19:45:47.434399       1 main.go:227] handling current node
	I0507 19:55:48.029549    5068 command_runner.go:130] ! I0507 19:45:47.434412       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.029589    5068 command_runner.go:130] ! I0507 19:45:47.434420       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.029589    5068 command_runner.go:130] ! I0507 19:45:47.434745       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.029642    5068 command_runner.go:130] ! I0507 19:45:47.434773       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.029642    5068 command_runner.go:130] ! I0507 19:45:57.448460       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.029680    5068 command_runner.go:130] ! I0507 19:45:57.448499       1 main.go:227] handling current node
	I0507 19:55:48.029680    5068 command_runner.go:130] ! I0507 19:45:57.448510       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.029725    5068 command_runner.go:130] ! I0507 19:45:57.448517       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.029762    5068 command_runner.go:130] ! I0507 19:45:57.448949       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.029762    5068 command_runner.go:130] ! I0507 19:45:57.448981       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.029762    5068 command_runner.go:130] ! I0507 19:46:07.463804       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.029762    5068 command_runner.go:130] ! I0507 19:46:07.463844       1 main.go:227] handling current node
	I0507 19:55:48.029811    5068 command_runner.go:130] ! I0507 19:46:07.463855       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.029811    5068 command_runner.go:130] ! I0507 19:46:07.463863       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.029842    5068 command_runner.go:130] ! I0507 19:46:07.464346       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.029842    5068 command_runner.go:130] ! I0507 19:46:07.464378       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.029842    5068 command_runner.go:130] ! I0507 19:46:17.480817       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.029842    5068 command_runner.go:130] ! I0507 19:46:17.480973       1 main.go:227] handling current node
	I0507 19:55:48.029897    5068 command_runner.go:130] ! I0507 19:46:17.481017       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.029897    5068 command_runner.go:130] ! I0507 19:46:17.481027       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.029968    5068 command_runner.go:130] ! I0507 19:46:17.481217       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.029997    5068 command_runner.go:130] ! I0507 19:46:17.481364       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.029997    5068 command_runner.go:130] ! I0507 19:46:27.490098       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.030035    5068 command_runner.go:130] ! I0507 19:46:27.490193       1 main.go:227] handling current node
	I0507 19:55:48.030064    5068 command_runner.go:130] ! I0507 19:46:27.490207       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.030109    5068 command_runner.go:130] ! I0507 19:46:27.490215       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.030109    5068 command_runner.go:130] ! I0507 19:46:27.490319       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.030147    5068 command_runner.go:130] ! I0507 19:46:27.490331       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.030147    5068 command_runner.go:130] ! I0507 19:46:37.503127       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.030185    5068 command_runner.go:130] ! I0507 19:46:37.503153       1 main.go:227] handling current node
	I0507 19:55:48.030216    5068 command_runner.go:130] ! I0507 19:46:37.503164       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.030216    5068 command_runner.go:130] ! I0507 19:46:37.503171       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:46:37.503279       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:46:37.503286       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:46:47.514408       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:46:47.514504       1 main.go:227] handling current node
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:46:47.514516       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:46:47.514524       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:46:47.514650       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:46:47.514661       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:46:57.529281       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:46:57.529381       1 main.go:227] handling current node
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:46:57.529394       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:46:57.529402       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:46:57.529689       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:46:57.529898       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:07.536805       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:07.536841       1 main.go:227] handling current node
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:07.536852       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:07.536859       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:07.537080       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:07.537103       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:17.551699       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:17.552050       1 main.go:227] handling current node
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:17.552126       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:17.552206       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:17.552600       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:17.552777       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:27.567122       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:27.567214       1 main.go:227] handling current node
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:27.567227       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:27.567251       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:27.567365       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:27.567376       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:37.579248       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.030241    5068 command_runner.go:130] ! I0507 19:47:37.579334       1 main.go:227] handling current node
	I0507 19:55:48.030764    5068 command_runner.go:130] ! I0507 19:47:37.579346       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.030764    5068 command_runner.go:130] ! I0507 19:47:37.579352       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.030800    5068 command_runner.go:130] ! I0507 19:47:37.580168       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.030800    5068 command_runner.go:130] ! I0507 19:47:37.580202       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.030846    5068 command_runner.go:130] ! I0507 19:47:47.591084       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.030846    5068 command_runner.go:130] ! I0507 19:47:47.591125       1 main.go:227] handling current node
	I0507 19:55:48.030882    5068 command_runner.go:130] ! I0507 19:47:47.591136       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.030882    5068 command_runner.go:130] ! I0507 19:47:47.591143       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.030926    5068 command_runner.go:130] ! I0507 19:47:47.591350       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.030926    5068 command_runner.go:130] ! I0507 19:47:47.591365       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.030961    5068 command_runner.go:130] ! I0507 19:47:57.599687       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.031005    5068 command_runner.go:130] ! I0507 19:47:57.599780       1 main.go:227] handling current node
	I0507 19:55:48.031005    5068 command_runner.go:130] ! I0507 19:47:57.600282       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.031041    5068 command_runner.go:130] ! I0507 19:47:57.600376       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.031085    5068 command_runner.go:130] ! I0507 19:47:57.600829       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.031085    5068 command_runner.go:130] ! I0507 19:47:57.601089       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.031122    5068 command_runner.go:130] ! I0507 19:48:07.608877       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.031122    5068 command_runner.go:130] ! I0507 19:48:07.608973       1 main.go:227] handling current node
	I0507 19:55:48.031166    5068 command_runner.go:130] ! I0507 19:48:07.609012       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.031166    5068 command_runner.go:130] ! I0507 19:48:07.609021       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.031202    5068 command_runner.go:130] ! I0507 19:48:07.609341       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.031202    5068 command_runner.go:130] ! I0507 19:48:07.609437       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.031246    5068 command_runner.go:130] ! I0507 19:48:17.616839       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.031246    5068 command_runner.go:130] ! I0507 19:48:17.616948       1 main.go:227] handling current node
	I0507 19:55:48.031282    5068 command_runner.go:130] ! I0507 19:48:17.616962       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.031327    5068 command_runner.go:130] ! I0507 19:48:17.616970       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.031327    5068 command_runner.go:130] ! I0507 19:48:17.617201       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.031364    5068 command_runner.go:130] ! I0507 19:48:17.617302       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.031407    5068 command_runner.go:130] ! I0507 19:48:27.622610       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.031407    5068 command_runner.go:130] ! I0507 19:48:27.622773       1 main.go:227] handling current node
	I0507 19:55:48.031444    5068 command_runner.go:130] ! I0507 19:48:27.622786       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.031488    5068 command_runner.go:130] ! I0507 19:48:27.622794       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.031525    5068 command_runner.go:130] ! I0507 19:48:27.622907       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.031525    5068 command_runner.go:130] ! I0507 19:48:27.622913       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.031525    5068 command_runner.go:130] ! I0507 19:48:37.635466       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.031569    5068 command_runner.go:130] ! I0507 19:48:37.635567       1 main.go:227] handling current node
	I0507 19:55:48.031569    5068 command_runner.go:130] ! I0507 19:48:37.635581       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.031606    5068 command_runner.go:130] ! I0507 19:48:37.635588       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.031650    5068 command_runner.go:130] ! I0507 19:48:37.635708       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.031650    5068 command_runner.go:130] ! I0507 19:48:37.635731       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.031687    5068 command_runner.go:130] ! I0507 19:48:47.648680       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.031687    5068 command_runner.go:130] ! I0507 19:48:47.648719       1 main.go:227] handling current node
	I0507 19:55:48.031730    5068 command_runner.go:130] ! I0507 19:48:47.648730       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.031767    5068 command_runner.go:130] ! I0507 19:48:47.648736       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.031767    5068 command_runner.go:130] ! I0507 19:48:47.649047       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.031812    5068 command_runner.go:130] ! I0507 19:48:47.649073       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.031812    5068 command_runner.go:130] ! I0507 19:48:57.661624       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.031812    5068 command_runner.go:130] ! I0507 19:48:57.661723       1 main.go:227] handling current node
	I0507 19:55:48.031850    5068 command_runner.go:130] ! I0507 19:48:57.661736       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.031894    5068 command_runner.go:130] ! I0507 19:48:57.661745       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.031894    5068 command_runner.go:130] ! I0507 19:48:57.661906       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.031894    5068 command_runner.go:130] ! I0507 19:48:57.661973       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.031977    5068 command_runner.go:130] ! I0507 19:49:07.670042       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.031977    5068 command_runner.go:130] ! I0507 19:49:07.670434       1 main.go:227] handling current node
	I0507 19:55:48.031977    5068 command_runner.go:130] ! I0507 19:49:07.670598       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.031977    5068 command_runner.go:130] ! I0507 19:49:07.670611       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.031977    5068 command_runner.go:130] ! I0507 19:49:07.670874       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.031977    5068 command_runner.go:130] ! I0507 19:49:07.670892       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.032073    5068 command_runner.go:130] ! I0507 19:49:17.688752       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.032073    5068 command_runner.go:130] ! I0507 19:49:17.688862       1 main.go:227] handling current node
	I0507 19:55:48.032073    5068 command_runner.go:130] ! I0507 19:49:17.689132       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.032109    5068 command_runner.go:130] ! I0507 19:49:17.689148       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.032151    5068 command_runner.go:130] ! I0507 19:49:17.689445       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:17.689461       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:27.703795       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:27.703825       1 main.go:227] handling current node
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:27.703838       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:27.703846       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:27.704329       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:27.704365       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:37.711372       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:37.711497       1 main.go:227] handling current node
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:37.711514       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:37.711524       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:37.711882       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:37.711917       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:47.727743       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:47.727786       1 main.go:227] handling current node
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:47.727798       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:47.727806       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:47.728278       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:47.728401       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:57.734796       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:57.734892       1 main.go:227] handling current node
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:57.734905       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:57.734913       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:57.735055       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:49:57.735077       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:50:07.747486       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:50:07.747598       1 main.go:227] handling current node
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:50:07.747612       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:50:07.747621       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:50:07.748185       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:50:07.748222       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:50:17.755602       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:50:17.755761       1 main.go:227] handling current node
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:50:17.755774       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:50:17.755782       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.032212    5068 command_runner.go:130] ! I0507 19:50:17.756227       1 main.go:223] Handling node with IPs: map[172.19.139.203:{}]
	I0507 19:55:48.032739    5068 command_runner.go:130] ! I0507 19:50:17.756267       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 19:55:48.032739    5068 command_runner.go:130] ! I0507 19:50:27.770562       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.032778    5068 command_runner.go:130] ! I0507 19:50:27.770678       1 main.go:227] handling current node
	I0507 19:55:48.032778    5068 command_runner.go:130] ! I0507 19:50:27.770692       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.032828    5068 command_runner.go:130] ! I0507 19:50:27.770700       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.032828    5068 command_runner.go:130] ! I0507 19:50:27.775735       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:48.032828    5068 command_runner.go:130] ! I0507 19:50:27.775767       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:48.032901    5068 command_runner.go:130] ! I0507 19:50:27.775839       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.19.129.4 Flags: [] Table: 0} 
	I0507 19:55:48.032933    5068 command_runner.go:130] ! I0507 19:50:37.783936       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.032933    5068 command_runner.go:130] ! I0507 19:50:37.787174       1 main.go:227] handling current node
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:50:37.787394       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:50:37.787449       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:50:37.787687       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:50:37.787791       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:50:47.804388       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:50:47.804423       1 main.go:227] handling current node
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:50:47.804434       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:50:47.804441       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:50:47.805320       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:50:47.805405       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:50:57.817550       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:50:57.817645       1 main.go:227] handling current node
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:50:57.817660       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:50:57.817668       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:50:57.817802       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:50:57.817829       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:07.829324       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:07.829427       1 main.go:227] handling current node
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:07.829440       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:07.829449       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:07.829931       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:07.830095       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:17.844953       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:17.845032       1 main.go:227] handling current node
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:17.845046       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:17.845128       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:17.845337       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:17.845367       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:27.851575       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:27.851686       1 main.go:227] handling current node
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:27.851698       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:27.851706       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.032960    5068 command_runner.go:130] ! I0507 19:51:27.852455       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:48.033486    5068 command_runner.go:130] ! I0507 19:51:27.852540       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:48.033486    5068 command_runner.go:130] ! I0507 19:51:37.859761       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:37.859857       1 main.go:227] handling current node
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:37.859871       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:37.859930       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:37.860319       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:37.860413       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:47.872402       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:47.872506       1 main.go:227] handling current node
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:47.872520       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:47.872528       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:47.872641       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:47.872692       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:57.885508       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:57.885541       1 main.go:227] handling current node
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:57.885551       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:57.885556       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:57.885664       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:51:57.885730       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:52:07.898773       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:52:07.899054       1 main.go:227] handling current node
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:52:07.899142       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:52:07.899258       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:52:07.899556       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:48.033526    5068 command_runner.go:130] ! I0507 19:52:07.899651       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:48.049851    5068 logs.go:123] Gathering logs for Docker ...
	I0507 19:55:48.049851    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I0507 19:55:48.078780    5068 command_runner.go:130] > May 07 19:53:11 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0507 19:55:48.079377    5068 command_runner.go:130] > May 07 19:53:11 minikube cri-dockerd[223]: time="2024-05-07T19:53:11Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0507 19:55:48.079377    5068 command_runner.go:130] > May 07 19:53:11 minikube cri-dockerd[223]: time="2024-05-07T19:53:11Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0507 19:55:48.079377    5068 command_runner.go:130] > May 07 19:53:11 minikube cri-dockerd[223]: time="2024-05-07T19:53:11Z" level=info msg="Start docker client with request timeout 0s"
	I0507 19:55:48.079377    5068 command_runner.go:130] > May 07 19:53:11 minikube cri-dockerd[223]: time="2024-05-07T19:53:11Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0507 19:55:48.079377    5068 command_runner.go:130] > May 07 19:53:11 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0507 19:55:48.079462    5068 command_runner.go:130] > May 07 19:53:11 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0507 19:55:48.079462    5068 command_runner.go:130] > May 07 19:53:11 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0507 19:55:48.079462    5068 command_runner.go:130] > May 07 19:53:13 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 1.
	I0507 19:55:48.079462    5068 command_runner.go:130] > May 07 19:53:13 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0507 19:55:48.079462    5068 command_runner.go:130] > May 07 19:53:14 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0507 19:55:48.079540    5068 command_runner.go:130] > May 07 19:53:14 minikube cri-dockerd[420]: time="2024-05-07T19:53:14Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0507 19:55:48.079540    5068 command_runner.go:130] > May 07 19:53:14 minikube cri-dockerd[420]: time="2024-05-07T19:53:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0507 19:55:48.079540    5068 command_runner.go:130] > May 07 19:53:14 minikube cri-dockerd[420]: time="2024-05-07T19:53:14Z" level=info msg="Start docker client with request timeout 0s"
	I0507 19:55:48.079540    5068 command_runner.go:130] > May 07 19:53:14 minikube cri-dockerd[420]: time="2024-05-07T19:53:14Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0507 19:55:48.079540    5068 command_runner.go:130] > May 07 19:53:14 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0507 19:55:48.079628    5068 command_runner.go:130] > May 07 19:53:14 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0507 19:55:48.079655    5068 command_runner.go:130] > May 07 19:53:14 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0507 19:55:48.079655    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 2.
	I0507 19:55:48.079655    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0507 19:55:48.079655    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0507 19:55:48.079721    5068 command_runner.go:130] > May 07 19:53:16 minikube cri-dockerd[428]: time="2024-05-07T19:53:16Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0507 19:55:48.079721    5068 command_runner.go:130] > May 07 19:53:16 minikube cri-dockerd[428]: time="2024-05-07T19:53:16Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0507 19:55:48.079747    5068 command_runner.go:130] > May 07 19:53:16 minikube cri-dockerd[428]: time="2024-05-07T19:53:16Z" level=info msg="Start docker client with request timeout 0s"
	I0507 19:55:48.079747    5068 command_runner.go:130] > May 07 19:53:16 minikube cri-dockerd[428]: time="2024-05-07T19:53:16Z" level=fatal msg="failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	I0507 19:55:48.079747    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: cri-docker.service: Main process exited, code=exited, status=1/FAILURE
	I0507 19:55:48.079822    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0507 19:55:48.079843    5068 command_runner.go:130] > May 07 19:53:16 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0507 19:55:48.079843    5068 command_runner.go:130] > May 07 19:53:18 minikube systemd[1]: cri-docker.service: Scheduled restart job, restart counter is at 3.
	I0507 19:55:48.079843    5068 command_runner.go:130] > May 07 19:53:18 minikube systemd[1]: Stopped CRI Interface for Docker Application Container Engine.
	I0507 19:55:48.079906    5068 command_runner.go:130] > May 07 19:53:18 minikube systemd[1]: cri-docker.service: Start request repeated too quickly.
	I0507 19:55:48.079931    5068 command_runner.go:130] > May 07 19:53:18 minikube systemd[1]: cri-docker.service: Failed with result 'exit-code'.
	I0507 19:55:48.079931    5068 command_runner.go:130] > May 07 19:53:18 minikube systemd[1]: Failed to start CRI Interface for Docker Application Container Engine.
	I0507 19:55:48.079959    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 systemd[1]: Starting Docker Application Container Engine...
	I0507 19:55:48.079959    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[656]: time="2024-05-07T19:53:56.261608662Z" level=info msg="Starting up"
	I0507 19:55:48.079959    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[656]: time="2024-05-07T19:53:56.264255181Z" level=info msg="containerd not running, starting managed containerd"
	I0507 19:55:48.080032    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[656]: time="2024-05-07T19:53:56.267798843Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=662
	I0507 19:55:48.080059    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.292663096Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0507 19:55:48.080090    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.316810753Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0507 19:55:48.080090    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.316928685Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0507 19:55:48.080130    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.317059021Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0507 19:55:48.080130    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.317074525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.317778516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.317870241Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.318053591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.318181025Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.318200831Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.318211033Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.318648452Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.319370548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.322128697Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.322287440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.322423477Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.322511301Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.323103462Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.323264406Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.323281010Z" level=info msg="metadata content store policy set" policy=shared
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.329512102Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.329607228Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.329699453Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.329991833Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.330149675Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.330391841Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331279682Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331558958Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331719502Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331752511Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0507 19:55:48.080172    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331780218Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0507 19:55:48.080713    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.331804825Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0507 19:55:48.080713    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332099005Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0507 19:55:48.080713    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332235742Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0507 19:55:48.080713    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332267150Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0507 19:55:48.080789    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332290657Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0507 19:55:48.080814    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332323766Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332346572Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332381181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332407189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332431795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332459103Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332481509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332504615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332528722Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332552728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332576134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332603642Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332625548Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332651055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.332673961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333069468Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333235413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333383554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333414662Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333616417Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333710943Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333725547Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333736349Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333796266Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333810170Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.333876888Z" level=info msg="NRI interface is disabled by configuration."
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.334581479Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.334799638Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.335014597Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:56 multinode-600000 dockerd[662]: time="2024-05-07T19:53:56.335347487Z" level=info msg="containerd successfully booted in 0.045275s"
	I0507 19:55:48.080842    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.321187459Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0507 19:55:48.081370    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.476287680Z" level=info msg="Loading containers: start."
	I0507 19:55:48.081370    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.877079663Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0507 19:55:48.081370    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.952570655Z" level=info msg="Loading containers: done."
	I0507 19:55:48.081451    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.979382413Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0507 19:55:48.081451    5068 command_runner.go:130] > May 07 19:53:57 multinode-600000 dockerd[656]: time="2024-05-07T19:53:57.980260841Z" level=info msg="Daemon has completed initialization"
	I0507 19:55:48.081451    5068 command_runner.go:130] > May 07 19:53:58 multinode-600000 dockerd[656]: time="2024-05-07T19:53:58.031005949Z" level=info msg="API listen on [::]:2376"
	I0507 19:55:48.081451    5068 command_runner.go:130] > May 07 19:53:58 multinode-600000 systemd[1]: Started Docker Application Container Engine.
	I0507 19:55:48.081451    5068 command_runner.go:130] > May 07 19:53:58 multinode-600000 dockerd[656]: time="2024-05-07T19:53:58.031256476Z" level=info msg="API listen on /var/run/docker.sock"
	I0507 19:55:48.081528    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 systemd[1]: Stopping Docker Application Container Engine...
	I0507 19:55:48.081528    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 dockerd[656]: time="2024-05-07T19:54:20.774198260Z" level=info msg="Processing signal 'terminated'"
	I0507 19:55:48.081528    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 dockerd[656]: time="2024-05-07T19:54:20.776613097Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0507 19:55:48.081528    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 dockerd[656]: time="2024-05-07T19:54:20.776805608Z" level=info msg="Daemon shutdown complete"
	I0507 19:55:48.081528    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 dockerd[656]: time="2024-05-07T19:54:20.776895213Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0507 19:55:48.081603    5068 command_runner.go:130] > May 07 19:54:20 multinode-600000 dockerd[656]: time="2024-05-07T19:54:20.776925814Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0507 19:55:48.081603    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 systemd[1]: docker.service: Deactivated successfully.
	I0507 19:55:48.081603    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 systemd[1]: Stopped Docker Application Container Engine.
	I0507 19:55:48.081603    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 systemd[1]: Starting Docker Application Container Engine...
	I0507 19:55:48.081603    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:21.844803108Z" level=info msg="Starting up"
	I0507 19:55:48.081678    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:21.845592952Z" level=info msg="containerd not running, starting managed containerd"
	I0507 19:55:48.081678    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:21.846791420Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1053
	I0507 19:55:48.081678    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.877926981Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	I0507 19:55:48.081678    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907006826Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0507 19:55:48.081754    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907105131Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0507 19:55:48.081754    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907143533Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0507 19:55:48.081754    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907156034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:48.081829    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907277841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:48.081829    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907322244Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:48.081829    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907477852Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:48.081905    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907596759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:48.081905    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907616260Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	I0507 19:55:48.081905    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907627661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:48.081905    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907658363Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:48.081979    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.907868674Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:48.081979    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.910668333Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:48.081979    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.910832542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0507 19:55:48.082063    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.910974650Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0507 19:55:48.082063    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911056755Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0507 19:55:48.082063    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911079056Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0507 19:55:48.082063    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911093757Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	I0507 19:55:48.082063    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911103457Z" level=info msg="metadata content store policy set" policy=shared
	I0507 19:55:48.082152    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911348471Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0507 19:55:48.082152    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911388073Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0507 19:55:48.082152    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911402674Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0507 19:55:48.082219    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911415475Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0507 19:55:48.082219    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911427076Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0507 19:55:48.082219    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911464678Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0507 19:55:48.082219    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911666589Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0507 19:55:48.082290    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911840999Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0507 19:55:48.082290    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911855900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0507 19:55:48.082290    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911868601Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0507 19:55:48.082290    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911909603Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0507 19:55:48.082366    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911924204Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0507 19:55:48.082366    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911941405Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0507 19:55:48.082366    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911955506Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0507 19:55:48.082439    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911969406Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0507 19:55:48.082439    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.911987907Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0507 19:55:48.082439    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912002408Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0507 19:55:48.082506    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912014509Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0507 19:55:48.082506    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912032910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.082506    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912048811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.082573    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912061212Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.082573    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912073812Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.082573    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912085813Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.082573    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912098614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.082642    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912110514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.082642    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912123015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.082642    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912136916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.082708    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912151617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.082708    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912162617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.082708    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912174218Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.082708    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912189019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.082778    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912203420Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0507 19:55:48.082778    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912223321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.082778    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912235321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.082845    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912245922Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0507 19:55:48.082845    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912307726Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0507 19:55:48.082845    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912877958Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	I0507 19:55:48.082845    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.912987064Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0507 19:55:48.082917    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913005665Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	I0507 19:55:48.082917    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913060968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0507 19:55:48.082980    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913148473Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0507 19:55:48.082980    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913162874Z" level=info msg="NRI interface is disabled by configuration."
	I0507 19:55:48.083044    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913518894Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0507 19:55:48.083044    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913666902Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0507 19:55:48.083044    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913836712Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0507 19:55:48.083044    5068 command_runner.go:130] > May 07 19:54:21 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:21.913869014Z" level=info msg="containerd successfully booted in 0.037038s"
	I0507 19:55:48.083124    5068 command_runner.go:130] > May 07 19:54:22 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:22.886642029Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0507 19:55:48.083142    5068 command_runner.go:130] > May 07 19:54:22 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:22.917701485Z" level=info msg="Loading containers: start."
	I0507 19:55:48.083142    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.220079986Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0507 19:55:48.083142    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.297928389Z" level=info msg="Loading containers: done."
	I0507 19:55:48.083211    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.323426131Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	I0507 19:55:48.083211    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.323561939Z" level=info msg="Daemon has completed initialization"
	I0507 19:55:48.083211    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.371361642Z" level=info msg="API listen on /var/run/docker.sock"
	I0507 19:55:48.083281    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 dockerd[1047]: time="2024-05-07T19:54:23.371563053Z" level=info msg="API listen on [::]:2376"
	I0507 19:55:48.083281    5068 command_runner.go:130] > May 07 19:54:23 multinode-600000 systemd[1]: Started Docker Application Container Engine.
	I0507 19:55:48.083281    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 systemd[1]: Starting CRI Interface for Docker Application Container Engine...
	I0507 19:55:48.083281    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Starting cri-dockerd 0.3.12 (c2e3805)"
	I0507 19:55:48.083281    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	I0507 19:55:48.083351    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Start docker client with request timeout 0s"
	I0507 19:55:48.083351    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Hairpin mode is set to hairpin-veth"
	I0507 19:55:48.083351    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Loaded network plugin cni"
	I0507 19:55:48.083351    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Docker cri networking managed by network plugin cni"
	I0507 19:55:48.083418    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Setting cgroupDriver cgroupfs"
	I0507 19:55:48.083418    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	I0507 19:55:48.083418    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	I0507 19:55:48.083418    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:24Z" level=info msg="Start cri-dockerd grpc backend"
	I0507 19:55:48.083484    5068 command_runner.go:130] > May 07 19:54:24 multinode-600000 systemd[1]: Started CRI Interface for Docker Application Container Engine.
	I0507 19:55:48.083484    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:28Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"coredns-7db6d8ff4d-5j966_kube-system\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"99af61c6e282aa13c7209e469e5e354f24968796fc455a65fdf2e8611f760994\""
	I0507 19:55:48.083550    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:28Z" level=info msg="Failed to read pod IP from plugin/docker: networkPlugin cni failed on the status hook for pod \"busybox-fc5497c4f-gcqlv_default\": CNI failed to retrieve network namespace path: cannot find network namespace for the terminated container \"4afb10dc8b11575b4eaa25a6b283141c6e029c9b44d3db3a69e4c934171b778e\""
	I0507 19:55:48.083550    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.542938073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.083550    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.543010577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.083618    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.543042179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.083618    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.543273292Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.083618    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/89c8a2313bcaf38f51cf6dbb015e4b3d1ed11fef724fa2a2ecfd86165a93435e/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:48.083693    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.675480269Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.083693    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.675546573Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.083693    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.675564974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.083759    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.684262666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.083759    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.725921222Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.083759    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.726068230Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.083832    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.726254241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.083832    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.726575359Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.083832    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.765272147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.083832    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.765421056Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.083908    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.765494660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.083908    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.766208600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.083908    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5c37290307d14956d6c732916d8f8cad779b8e57047c0b20cc5a97abeea21709/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:48.083978    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/c666fba0d07531cb6ff4a110f6538c8fbffaa474e8b7744eecd95c2c5449ac24/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:48.083978    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.943914850Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.084044    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.944218768Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.084044    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.944339474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084044    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:29.944568887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084044    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/fec63580ff2669cca3046ae403d6a288bb279ca84766c91bd6464d8b2335c567/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:48.084141    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.094912590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.084141    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.095972050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.084141    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.096703691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084208    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.098389387Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084208    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.174777807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.084270    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.174917115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.084293    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.174947116Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084293    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.175427944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084350    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.179401568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.180225415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.180387824Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:30.180691941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:33Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.393545198Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.393776611Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.393798612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.393904518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.429313521Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.429355823Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.429371924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.429510732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.450929143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.451230160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.451320165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.451541578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/09d2fda974adf9dbabc54b3412155043fbda490a951a6b325ac66ef3e385e99d/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/deb171c003562d2f3e3c8e1ec2fbec5ecaa700e48e277dd0cc50addf6cbb21a3/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:54:34Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/857f6b563091091373f72d143ed2af0ab7469cb77eb82675a7f665d172f1793a/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:48.084372    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.950666506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.951075429Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.951189235Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:34.951373146Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.055721147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.055815952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.055860855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.056635099Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.189264699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.189723325Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.189831731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 dockerd[1053]: time="2024-05-07T19:54:35.190012442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 dockerd[1047]: time="2024-05-07T19:55:05.347820040Z" level=info msg="ignoring event" container=d1e3e4629bc4ab52c27aca01f9ac01a28969e78a370077ee687920a51d952e19 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:05.348040655Z" level=info msg="shim disconnected" id=d1e3e4629bc4ab52c27aca01f9ac01a28969e78a370077ee687920a51d952e19 namespace=moby
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:05.348091458Z" level=warning msg="cleaning up after shim disconnected" id=d1e3e4629bc4ab52c27aca01f9ac01a28969e78a370077ee687920a51d952e19 namespace=moby
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:05.348099558Z" level=info msg="cleaning up dead shim" namespace=moby
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:17 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:17.037412688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:17 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:17.037563097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:17 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:17.037957521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:17 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:17.038368445Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.073681495Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.075144480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.075421996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.075618907Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.083978388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.085517877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.085609682Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.085891498Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.084973    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:55:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/56c438bec17775a85810d84da03e966b7c8b3307695f327170eb2d1f6f413190/resolv.conf as [nameserver 172.19.128.1]"
	I0507 19:55:48.085535    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 cri-dockerd[1274]: time="2024-05-07T19:55:38Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f8dc35309168fbb7208444e18cedbe0a5ab2522d363e8b998b56b731b941b23c/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	I0507 19:55:48.085535    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.552043154Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.085535    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.552176862Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.552192263Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.552275368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.595560233Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.595882353Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.595904855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:38 multinode-600000 dockerd[1053]: time="2024-05-07T19:55:38.596079265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:40 multinode-600000 dockerd[1047]: 2024/05/07 19:55:40 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:41 multinode-600000 dockerd[1047]: 2024/05/07 19:55:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:41 multinode-600000 dockerd[1047]: 2024/05/07 19:55:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:41 multinode-600000 dockerd[1047]: 2024/05/07 19:55:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:41 multinode-600000 dockerd[1047]: 2024/05/07 19:55:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.085613    5068 command_runner.go:130] > May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.086137    5068 command_runner.go:130] > May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.086137    5068 command_runner.go:130] > May 07 19:55:45 multinode-600000 dockerd[1047]: 2024/05/07 19:55:45 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.086137    5068 command_runner.go:130] > May 07 19:55:45 multinode-600000 dockerd[1047]: 2024/05/07 19:55:45 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.086219    5068 command_runner.go:130] > May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.086219    5068 command_runner.go:130] > May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.086219    5068 command_runner.go:130] > May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.086219    5068 command_runner.go:130] > May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.086219    5068 command_runner.go:130] > May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.086219    5068 command_runner.go:130] > May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	I0507 19:55:48.114713    5068 logs.go:123] Gathering logs for kubelet ...
	I0507 19:55:48.114713    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 19:55:48.145631    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0507 19:55:48.145691    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 kubelet[1385]: I0507 19:54:25.312690    1385 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0507 19:55:48.145723    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 kubelet[1385]: I0507 19:54:25.313053    1385 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:48.145723    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 kubelet[1385]: I0507 19:54:25.314038    1385 server.go:927] "Client rotation is on, will bootstrap in background"
	I0507 19:55:48.145781    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 kubelet[1385]: E0507 19:54:25.314980    1385 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0507 19:55:48.145811    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0507 19:55:48.145811    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0507 19:55:48.145811    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	I0507 19:55:48.145811    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0507 19:55:48.145868    5068 command_runner.go:130] > May 07 19:54:25 multinode-600000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0507 19:55:48.145898    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 kubelet[1417]: I0507 19:54:26.032056    1417 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0507 19:55:48.145898    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 kubelet[1417]: I0507 19:54:26.032321    1417 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:48.145898    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 kubelet[1417]: I0507 19:54:26.032668    1417 server.go:927] "Client rotation is on, will bootstrap in background"
	I0507 19:55:48.145963    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 kubelet[1417]: E0507 19:54:26.032817    1417 run.go:74] "command failed" err="failed to run Kubelet: unable to load bootstrap kubeconfig: stat /etc/kubernetes/bootstrap-kubelet.conf: no such file or directory"
	I0507 19:55:48.146001    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: kubelet.service: Deactivated successfully.
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:26 multinode-600000 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.682448    1526 server.go:484] "Kubelet version" kubeletVersion="v1.30.0"
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.683051    1526 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.683318    1526 server.go:927] "Client rotation is on, will bootstrap in background"
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.685208    1526 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.694353    1526 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.719318    1526 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /"
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.719480    1526 server.go:810] "NoSwap is set due to memorySwapBehavior not specified" memorySwapBehavior="" FailSwapOn=false
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.720216    1526 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.720309    1526 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"multinode-600000","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
	I0507 19:55:48.146021    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.720926    1526 topology_manager.go:138] "Creating topology manager with none policy"
	I0507 19:55:48.146548    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.721001    1526 container_manager_linux.go:301] "Creating device plugin manager"
	I0507 19:55:48.146548    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.721416    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0507 19:55:48.146607    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.723173    1526 kubelet.go:400] "Attempting to sync node with API server"
	I0507 19:55:48.146642    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.723253    1526 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
	I0507 19:55:48.146702    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.723313    1526 kubelet.go:312] "Adding apiserver pod source"
	I0507 19:55:48.146702    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.723974    1526 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
	I0507 19:55:48.146702    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: W0507 19:54:28.726787    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-600000&limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:48.146702    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.726939    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-600000&limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:48.146702    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.731381    1526 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="docker" version="26.0.2" apiVersion="v1"
	I0507 19:55:48.146702    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.733269    1526 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
	I0507 19:55:48.146702    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: W0507 19:54:28.734851    1526 probe.go:272] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
	I0507 19:55:48.146702    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.736816    1526 server.go:1264] "Started kubelet"
	I0507 19:55:48.146702    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: W0507 19:54:28.737228    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:48.146702    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.737335    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:48.146702    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.738410    1526 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
	I0507 19:55:48.146702    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.740846    1526 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
	I0507 19:55:48.146702    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.742005    1526 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
	I0507 19:55:48.146702    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.742309    1526 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://control-plane.minikube.internal:8443/api/v1/namespaces/default/events\": dial tcp 172.19.135.22:8443: connect: connection refused" event="&Event{ObjectMeta:{multinode-600000.17cd4cf9c52f26de  default    0 0001-01-01 00:00:00 +0000 UTC <nil> <nil> map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:multinode-600000,UID:multinode-600000,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:multinode-600000,},FirstTimestamp:2024-05-07 19:54:28.736796382 +0000 UTC m=+0.138302022,LastTimestamp:2024-05-07 19:54:28.736796382 +0000 UTC m=+0.138302022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:multinode-600000,}"
	I0507 19:55:48.146702    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.743118    1526 server.go:455] "Adding debug handlers to kubelet server"
	I0507 19:55:48.146702    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.749839    1526 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
	I0507 19:55:48.147229    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.768561    1526 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
	I0507 19:55:48.147270    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: W0507 19:54:28.769072    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:48.147352    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.769183    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:48.147398    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.769400    1526 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
	I0507 19:55:48.147435    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.769456    1526 factory.go:221] Registration of the systemd container factory successfully
	I0507 19:55:48.147435    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.770894    1526 factory.go:219] Registration of the crio container factory failed: Get "http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)crio%!F(MISSING)crio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
	I0507 19:55:48.147481    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.772962    1526 volume_manager.go:291] "Starting Kubelet Volume Manager"
	I0507 19:55:48.147556    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.785539    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-600000?timeout=10s\": dial tcp 172.19.135.22:8443: connect: connection refused" interval="200ms"
	I0507 19:55:48.147587    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.791725    1526 reconciler.go:26] "Reconciler: start to sync state"
	I0507 19:55:48.147614    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.830988    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
	I0507 19:55:48.147671    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.840813    1526 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
	I0507 19:55:48.147671    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.840916    1526 status_manager.go:217] "Starting to sync pod status with apiserver"
	I0507 19:55:48.147746    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.841140    1526 kubelet.go:2337] "Starting kubelet main sync loop"
	I0507 19:55:48.147779    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.841245    1526 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: W0507 19:54:28.856981    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.857107    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.863787    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.867313    1526 cpu_manager.go:214] "Starting CPU manager" policy="none"
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.867334    1526 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.867353    1526 state_mem.go:36] "Initialized new in-memory state store"
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.867956    1526 state_mem.go:88] "Updated default CPUSet" cpuSet=""
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.867975    1526 state_mem.go:96] "Updated CPUSet assignments" assignments={}
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.868003    1526 policy_none.go:49] "None policy: Start"
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.868488    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-600000"
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.869266    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.135.22:8443: connect: connection refused" node="multinode-600000"
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.874219    1526 memory_manager.go:170] "Starting memorymanager" policy="None"
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.874241    1526 state_mem.go:35] "Initializing new in-memory state store"
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.875298    1526 state_mem.go:75] "Updated machine memory state"
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.878167    1526 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.878458    1526 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.880352    1526 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
	I0507 19:55:48.147835    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.881798    1526 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"multinode-600000\" not found"
	I0507 19:55:48.148360    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.941803    1526 topology_manager.go:215] "Topology Admit Handler" podUID="cd9cba8f94818776ec6d8836322192b3" podNamespace="kube-system" podName="kube-apiserver-multinode-600000"
	I0507 19:55:48.148396    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.944197    1526 topology_manager.go:215] "Topology Admit Handler" podUID="f5d6aa60dc93b5e562f37ed2236c3022" podNamespace="kube-system" podName="kube-controller-manager-multinode-600000"
	I0507 19:55:48.148493    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.945407    1526 topology_manager.go:215] "Topology Admit Handler" podUID="7c4ee79f6d4f6adb00b636f817445fef" podNamespace="kube-system" podName="kube-scheduler-multinode-600000"
	I0507 19:55:48.148547    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.946291    1526 topology_manager.go:215] "Topology Admit Handler" podUID="1581bf6b00d338797c8fb8b10b74abde" podNamespace="kube-system" podName="etcd-multinode-600000"
	I0507 19:55:48.148585    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.947956    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="86921e7643746441a6e93f7fb6fecdf7c7bf46b090192f2fc398129fad83dd9d"
	I0507 19:55:48.148651    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.947978    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="70cff02905e8f07315ff7e01ce388c0da3246f3c03bb7c785b3b7979a31852a9"
	I0507 19:55:48.148651    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.948141    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="58ebd877d77fb0eee19924ed195f0ccced541015095c32b9d58ab78831543622"
	I0507 19:55:48.148732    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.948156    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75f27faec2ed6996286f7030cea68f26137cea7abaedede628d29933fbde0ae9"
	I0507 19:55:48.148732    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.959165    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99af61c6e282aa13c7209e469e5e354f24968796fc455a65fdf2e8611f760994"
	I0507 19:55:48.148799    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.970524    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57950c0fdcbe4c7e6d3490c6477c947eac153e908d8e81090ef8205a050bb14c"
	I0507 19:55:48.148799    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: E0507 19:54:28.987462    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-600000?timeout=10s\": dial tcp 172.19.135.22:8443: connect: connection refused" interval="400ms"
	I0507 19:55:48.148887    5068 command_runner.go:130] > May 07 19:54:28 multinode-600000 kubelet[1526]: I0507 19:54:28.989236    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca0d420373470a8f3b23bd3c9b5c59f5e7c4896da57782b69f9498d3ff333fb5"
	I0507 19:55:48.148929    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.000822    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4afb10dc8b11575b4eaa25a6b283141c6e029c9b44d3db3a69e4c934171b778e"
	I0507 19:55:48.148953    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010098    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cd9cba8f94818776ec6d8836322192b3-k8s-certs\") pod \"kube-apiserver-multinode-600000\" (UID: \"cd9cba8f94818776ec6d8836322192b3\") " pod="kube-system/kube-apiserver-multinode-600000"
	I0507 19:55:48.148953    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010146    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f5d6aa60dc93b5e562f37ed2236c3022-flexvolume-dir\") pod \"kube-controller-manager-multinode-600000\" (UID: \"f5d6aa60dc93b5e562f37ed2236c3022\") " pod="kube-system/kube-controller-manager-multinode-600000"
	I0507 19:55:48.149013    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010167    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f5d6aa60dc93b5e562f37ed2236c3022-kubeconfig\") pod \"kube-controller-manager-multinode-600000\" (UID: \"f5d6aa60dc93b5e562f37ed2236c3022\") " pod="kube-system/kube-controller-manager-multinode-600000"
	I0507 19:55:48.149074    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010187    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7c4ee79f6d4f6adb00b636f817445fef-kubeconfig\") pod \"kube-scheduler-multinode-600000\" (UID: \"7c4ee79f6d4f6adb00b636f817445fef\") " pod="kube-system/kube-scheduler-multinode-600000"
	I0507 19:55:48.149074    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010223    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-certs\" (UniqueName: \"kubernetes.io/host-path/1581bf6b00d338797c8fb8b10b74abde-etcd-certs\") pod \"etcd-multinode-600000\" (UID: \"1581bf6b00d338797c8fb8b10b74abde\") " pod="kube-system/etcd-multinode-600000"
	I0507 19:55:48.149134    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010245    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cd9cba8f94818776ec6d8836322192b3-ca-certs\") pod \"kube-apiserver-multinode-600000\" (UID: \"cd9cba8f94818776ec6d8836322192b3\") " pod="kube-system/kube-apiserver-multinode-600000"
	I0507 19:55:48.149134    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010264    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f5d6aa60dc93b5e562f37ed2236c3022-ca-certs\") pod \"kube-controller-manager-multinode-600000\" (UID: \"f5d6aa60dc93b5e562f37ed2236c3022\") " pod="kube-system/kube-controller-manager-multinode-600000"
	I0507 19:55:48.149200    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010292    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f5d6aa60dc93b5e562f37ed2236c3022-k8s-certs\") pod \"kube-controller-manager-multinode-600000\" (UID: \"f5d6aa60dc93b5e562f37ed2236c3022\") " pod="kube-system/kube-controller-manager-multinode-600000"
	I0507 19:55:48.149263    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010323    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f5d6aa60dc93b5e562f37ed2236c3022-usr-share-ca-certificates\") pod \"kube-controller-manager-multinode-600000\" (UID: \"f5d6aa60dc93b5e562f37ed2236c3022\") " pod="kube-system/kube-controller-manager-multinode-600000"
	I0507 19:55:48.149263    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010365    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-data\" (UniqueName: \"kubernetes.io/host-path/1581bf6b00d338797c8fb8b10b74abde-etcd-data\") pod \"etcd-multinode-600000\" (UID: \"1581bf6b00d338797c8fb8b10b74abde\") " pod="kube-system/etcd-multinode-600000"
	I0507 19:55:48.149263    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.010413    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cd9cba8f94818776ec6d8836322192b3-usr-share-ca-certificates\") pod \"kube-apiserver-multinode-600000\" (UID: \"cd9cba8f94818776ec6d8836322192b3\") " pod="kube-system/kube-apiserver-multinode-600000"
	I0507 19:55:48.149357    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.013343    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af16a92d7c1cc8f0246bdad95c9e580f729470ea118e03dce721c77127d06f56"
	I0507 19:55:48.149357    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.071582    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-600000"
	I0507 19:55:48.149357    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.072513    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.135.22:8443: connect: connection refused" node="multinode-600000"
	I0507 19:55:48.149357    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.389792    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-600000?timeout=10s\": dial tcp 172.19.135.22:8443: connect: connection refused" interval="800ms"
	I0507 19:55:48.149452    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: I0507 19:54:29.474674    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-600000"
	I0507 19:55:48.149452    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.475643    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.135.22:8443: connect: connection refused" node="multinode-600000"
	I0507 19:55:48.149523    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: W0507 19:54:29.564966    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:48.149523    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.565028    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:48.149585    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: W0507 19:54:29.712836    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:48.149585    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.712892    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://control-plane.minikube.internal:8443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:48.149665    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: W0507 19:54:29.898338    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:48.149684    5068 command_runner.go:130] > May 07 19:54:29 multinode-600000 kubelet[1526]: E0507 19:54:29.898478    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:48.149684    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 kubelet[1526]: W0507 19:54:30.187733    1526 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-600000&limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:48.149746    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 kubelet[1526]: E0507 19:54:30.187857    1526 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%!D(MISSING)multinode-600000&limit=500&resourceVersion=0": dial tcp 172.19.135.22:8443: connect: connection refused
	I0507 19:55:48.149746    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 kubelet[1526]: E0507 19:54:30.195864    1526 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/multinode-600000?timeout=10s\": dial tcp 172.19.135.22:8443: connect: connection refused" interval="1.6s"
	I0507 19:55:48.149807    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 kubelet[1526]: I0507 19:54:30.277090    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-600000"
	I0507 19:55:48.149807    5068 command_runner.go:130] > May 07 19:54:30 multinode-600000 kubelet[1526]: E0507 19:54:30.278121    1526 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 172.19.135.22:8443: connect: connection refused" node="multinode-600000"
	I0507 19:55:48.149874    5068 command_runner.go:130] > May 07 19:54:31 multinode-600000 kubelet[1526]: I0507 19:54:31.880610    1526 kubelet_node_status.go:73] "Attempting to register node" node="multinode-600000"
	I0507 19:55:48.149874    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.731174    1526 apiserver.go:52] "Watching apiserver"
	I0507 19:55:48.149874    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.747542    1526 topology_manager.go:215] "Topology Admit Handler" podUID="d067d438-f4af-42e8-930d-3423a3ac211f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5j966"
	I0507 19:55:48.149939    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.747825    1526 topology_manager.go:215] "Topology Admit Handler" podUID="9a39807c-6243-4aa2-86f4-8626031c80a6" podNamespace="kube-system" podName="kube-proxy-c9gw5"
	I0507 19:55:48.149939    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.748122    1526 topology_manager.go:215] "Topology Admit Handler" podUID="b5145a4d-38aa-426e-947f-3480e269470e" podNamespace="kube-system" podName="kindnet-zw4r9"
	I0507 19:55:48.149939    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.748365    1526 topology_manager.go:215] "Topology Admit Handler" podUID="90142b77-53fb-42e1-94f8-7f8a3c7765ac" podNamespace="kube-system" podName="storage-provisioner"
	I0507 19:55:48.150003    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.748551    1526 topology_manager.go:215] "Topology Admit Handler" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a" podNamespace="default" podName="busybox-fc5497c4f-gcqlv"
	I0507 19:55:48.150003    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.749095    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.150149    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.750550    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/etcd-multinode-600000" podUID="d55601ee-11f4-432c-8170-ecc4d8212782"
	I0507 19:55:48.150149    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.750908    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.150149    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.770134    1526 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	I0507 19:55:48.150223    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.810065    1526 kubelet_node_status.go:112] "Node was previously registered" node="multinode-600000"
	I0507 19:55:48.150223    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.810163    1526 kubelet_node_status.go:76] "Successfully registered node" node="multinode-600000"
	I0507 19:55:48.150290    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.818444    1526 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	I0507 19:55:48.150290    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.819648    1526 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	I0507 19:55:48.150360    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.820321    1526 setters.go:580] "Node became not ready" node="multinode-600000" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-05-07T19:54:33Z","lastTransitionTime":"2024-05-07T19:54:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"}
	I0507 19:55:48.150360    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.837252    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/etcd-multinode-600000"
	I0507 19:55:48.150427    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.845847    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9a39807c-6243-4aa2-86f4-8626031c80a6-lib-modules\") pod \"kube-proxy-c9gw5\" (UID: \"9a39807c-6243-4aa2-86f4-8626031c80a6\") " pod="kube-system/kube-proxy-c9gw5"
	I0507 19:55:48.150427    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.845991    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b5145a4d-38aa-426e-947f-3480e269470e-xtables-lock\") pod \"kindnet-zw4r9\" (UID: \"b5145a4d-38aa-426e-947f-3480e269470e\") " pod="kube-system/kindnet-zw4r9"
	I0507 19:55:48.150490    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.846149    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b5145a4d-38aa-426e-947f-3480e269470e-lib-modules\") pod \"kindnet-zw4r9\" (UID: \"b5145a4d-38aa-426e-947f-3480e269470e\") " pod="kube-system/kindnet-zw4r9"
	I0507 19:55:48.150556    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.846211    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/90142b77-53fb-42e1-94f8-7f8a3c7765ac-tmp\") pod \"storage-provisioner\" (UID: \"90142b77-53fb-42e1-94f8-7f8a3c7765ac\") " pod="kube-system/storage-provisioner"
	I0507 19:55:48.150556    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.846289    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/b5145a4d-38aa-426e-947f-3480e269470e-cni-cfg\") pod \"kindnet-zw4r9\" (UID: \"b5145a4d-38aa-426e-947f-3480e269470e\") " pod="kube-system/kindnet-zw4r9"
	I0507 19:55:48.150645    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.846373    1526 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9a39807c-6243-4aa2-86f4-8626031c80a6-xtables-lock\") pod \"kube-proxy-c9gw5\" (UID: \"9a39807c-6243-4aa2-86f4-8626031c80a6\") " pod="kube-system/kube-proxy-c9gw5"
	I0507 19:55:48.150645    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.846904    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:48.150710    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.847130    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:54:34.347095993 +0000 UTC m=+5.748601633 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:48.150710    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.887296    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.150772    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.887405    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.150772    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: E0507 19:54:33.887613    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:54:34.387566082 +0000 UTC m=+5.789071722 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.150849    5068 command_runner.go:130] > May 07 19:54:33 multinode-600000 kubelet[1526]: I0507 19:54:33.981303    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-multinode-600000" podStartSLOduration=0.981289683 podStartE2EDuration="981.289683ms" podCreationTimestamp="2024-05-07 19:54:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-07 19:54:33.964275321 +0000 UTC m=+5.365780961" watchObservedRunningTime="2024-05-07 19:54:33.981289683 +0000 UTC m=+5.382795323"
	I0507 19:55:48.150849    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: E0507 19:54:34.351653    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:48.150917    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: E0507 19:54:34.352036    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:54:35.352015549 +0000 UTC m=+6.753521289 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:48.150983    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: E0507 19:54:34.452926    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.150983    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: E0507 19:54:34.452966    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.151053    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: E0507 19:54:34.453012    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:54:35.45299776 +0000 UTC m=+6.854503500 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.151053    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: I0507 19:54:34.661528    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="deb171c003562d2f3e3c8e1ec2fbec5ecaa700e48e277dd0cc50addf6cbb21a3"
	I0507 19:55:48.151117    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: I0507 19:54:34.862381    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4a96b44957f27b92ef21190115bc428" path="/var/lib/kubelet/pods/b4a96b44957f27b92ef21190115bc428/volumes"
	I0507 19:55:48.151117    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: I0507 19:54:34.863294    1526 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d902475f151631231b80fe38edab39e8" path="/var/lib/kubelet/pods/d902475f151631231b80fe38edab39e8/volumes"
	I0507 19:55:48.151117    5068 command_runner.go:130] > May 07 19:54:34 multinode-600000 kubelet[1526]: I0507 19:54:34.938029    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="857f6b563091091373f72d143ed2af0ab7469cb77eb82675a7f665d172f1793a"
	I0507 19:55:48.151191    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: I0507 19:54:35.108646    1526 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09d2fda974adf9dbabc54b3412155043fbda490a951a6b325ac66ef3e385e99d"
	I0507 19:55:48.151214    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: I0507 19:54:35.109054    1526 kubelet.go:1908] "Trying to delete pod" pod="kube-system/kube-apiserver-multinode-600000" podUID="c2ba4e1a-3041-4395-a246-9dd28358b95a"
	I0507 19:55:48.151242    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: I0507 19:54:35.145688    1526 kubelet.go:1913] "Deleted mirror pod because it is outdated" pod="kube-system/kube-apiserver-multinode-600000"
	I0507 19:55:48.151242    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.358372    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:48.151242    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.358454    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:54:37.358438267 +0000 UTC m=+8.759943907 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:48.151242    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.459230    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.151242    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.459270    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.151242    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.459321    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:54:37.459300671 +0000 UTC m=+8.860806411 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.151242    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.842389    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.151242    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: E0507 19:54:35.843885    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.151242    5068 command_runner.go:130] > May 07 19:54:35 multinode-600000 kubelet[1526]: I0507 19:54:35.878265    1526 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-multinode-600000" podStartSLOduration=0.878244864 podStartE2EDuration="878.244864ms" podCreationTimestamp="2024-05-07 19:54:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-05-07 19:54:35.194323185 +0000 UTC m=+6.595828825" watchObservedRunningTime="2024-05-07 19:54:35.878244864 +0000 UTC m=+7.279750504"
	I0507 19:55:48.151242    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.373090    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:48.151242    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.373161    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:54:41.373147008 +0000 UTC m=+12.774652748 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:48.151242    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.475199    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.151242    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.475408    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.151242    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.475544    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:54:41.475519298 +0000 UTC m=+12.877025038 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.151242    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.842214    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.151242    5068 command_runner.go:130] > May 07 19:54:37 multinode-600000 kubelet[1526]: E0507 19:54:37.842786    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.151771    5068 command_runner.go:130] > May 07 19:54:39 multinode-600000 kubelet[1526]: E0507 19:54:39.842086    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.151812    5068 command_runner.go:130] > May 07 19:54:39 multinode-600000 kubelet[1526]: E0507 19:54:39.842432    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.151812    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.418265    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.418590    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:54:49.418553195 +0000 UTC m=+20.820058935 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.518834    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.519001    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.519057    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:54:49.519041878 +0000 UTC m=+20.920547618 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.842245    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:41 multinode-600000 kubelet[1526]: E0507 19:54:41.842350    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:43 multinode-600000 kubelet[1526]: E0507 19:54:43.842034    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:43 multinode-600000 kubelet[1526]: E0507 19:54:43.842216    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:45 multinode-600000 kubelet[1526]: E0507 19:54:45.842657    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:45 multinode-600000 kubelet[1526]: E0507 19:54:45.842807    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:47 multinode-600000 kubelet[1526]: E0507 19:54:47.842575    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:47 multinode-600000 kubelet[1526]: E0507 19:54:47.843152    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.491796    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.491989    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:55:05.491971903 +0000 UTC m=+36.893477643 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.592490    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.592595    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.151873    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.592653    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:55:05.592637338 +0000 UTC m=+36.994142978 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.152399    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.842152    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:54:49 multinode-600000 kubelet[1526]: E0507 19:54:49.842295    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:54:51 multinode-600000 kubelet[1526]: E0507 19:54:51.841678    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:54:51 multinode-600000 kubelet[1526]: E0507 19:54:51.841994    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:54:53 multinode-600000 kubelet[1526]: E0507 19:54:53.841974    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:54:53 multinode-600000 kubelet[1526]: E0507 19:54:53.842654    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:54:55 multinode-600000 kubelet[1526]: E0507 19:54:55.842626    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:54:55 multinode-600000 kubelet[1526]: E0507 19:54:55.842841    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:54:57 multinode-600000 kubelet[1526]: E0507 19:54:57.841446    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:54:57 multinode-600000 kubelet[1526]: E0507 19:54:57.842105    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:54:59 multinode-600000 kubelet[1526]: E0507 19:54:59.842713    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:54:59 multinode-600000 kubelet[1526]: E0507 19:54:59.842855    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:55:01 multinode-600000 kubelet[1526]: E0507 19:55:01.842363    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:55:01 multinode-600000 kubelet[1526]: E0507 19:55:01.842882    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:55:03 multinode-600000 kubelet[1526]: E0507 19:55:03.841937    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:55:03 multinode-600000 kubelet[1526]: E0507 19:55:03.841997    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: I0507 19:55:05.501553    1526 scope.go:117] "RemoveContainer" containerID="232351adf489ab41e3b95183df116efc3adc75538ec9a57cef3b4ce608097033"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: I0507 19:55:05.501881    1526 scope.go:117] "RemoveContainer" containerID="d1e3e4629bc4ab52c27aca01f9ac01a28969e78a370077ee687920a51d952e19"
	I0507 19:55:48.152469    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.502298    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(90142b77-53fb-42e1-94f8-7f8a3c7765ac)\"" pod="kube-system/storage-provisioner" podUID="90142b77-53fb-42e1-94f8-7f8a3c7765ac"
	I0507 19:55:48.152987    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.529223    1526 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	I0507 19:55:48.152987    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.529356    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume podName:d067d438-f4af-42e8-930d-3423a3ac211f nodeName:}" failed. No retries permitted until 2024-05-07 19:55:37.529338774 +0000 UTC m=+68.930844414 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d067d438-f4af-42e8-930d-3423a3ac211f-config-volume") pod "coredns-7db6d8ff4d-5j966" (UID: "d067d438-f4af-42e8-930d-3423a3ac211f") : object "kube-system"/"coredns" not registered
	I0507 19:55:48.153062    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.629243    1526 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.153062    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.629467    1526 projected.go:200] Error preparing data for projected volume kube-api-access-77z75 for pod default/busybox-fc5497c4f-gcqlv: object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.153062    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.629628    1526 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75 podName:d98009ce-3495-481a-86b3-7c1e9422ca5a nodeName:}" failed. No retries permitted until 2024-05-07 19:55:37.629609811 +0000 UTC m=+69.031115551 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-77z75" (UniqueName: "kubernetes.io/projected/d98009ce-3495-481a-86b3-7c1e9422ca5a-kube-api-access-77z75") pod "busybox-fc5497c4f-gcqlv" (UID: "d98009ce-3495-481a-86b3-7c1e9422ca5a") : object "default"/"kube-root-ca.crt" not registered
	I0507 19:55:48.153062    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.842421    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.153062    5068 command_runner.go:130] > May 07 19:55:05 multinode-600000 kubelet[1526]: E0507 19:55:05.842632    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.153062    5068 command_runner.go:130] > May 07 19:55:07 multinode-600000 kubelet[1526]: E0507 19:55:07.843040    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-gcqlv" podUID="d98009ce-3495-481a-86b3-7c1e9422ca5a"
	I0507 19:55:48.153062    5068 command_runner.go:130] > May 07 19:55:07 multinode-600000 kubelet[1526]: E0507 19:55:07.843857    1526 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-5j966" podUID="d067d438-f4af-42e8-930d-3423a3ac211f"
	I0507 19:55:48.153062    5068 command_runner.go:130] > May 07 19:55:09 multinode-600000 kubelet[1526]: I0507 19:55:09.363617    1526 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	I0507 19:55:48.153062    5068 command_runner.go:130] > May 07 19:55:16 multinode-600000 kubelet[1526]: I0507 19:55:16.842451    1526 scope.go:117] "RemoveContainer" containerID="d1e3e4629bc4ab52c27aca01f9ac01a28969e78a370077ee687920a51d952e19"
	I0507 19:55:48.153062    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]: I0507 19:55:28.871479    1526 scope.go:117] "RemoveContainer" containerID="1ad9d594832564eb3ecbb3ab96ce2eec4cb095edf31a39c051d592ae068a9a6f"
	I0507 19:55:48.153062    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]: E0507 19:55:28.875911    1526 iptables.go:577] "Could not set up iptables canary" err=<
	I0507 19:55:48.153062    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	I0507 19:55:48.153062    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	I0507 19:55:48.153062    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	I0507 19:55:48.153062    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	I0507 19:55:48.153062    5068 command_runner.go:130] > May 07 19:55:28 multinode-600000 kubelet[1526]: I0507 19:55:28.916075    1526 scope.go:117] "RemoveContainer" containerID="675dcdcafeef04c4b82949c75f102ba97dda812ac3352b02e00d56d085f5d3bc"
	I0507 19:55:48.192533    5068 logs.go:123] Gathering logs for coredns [d27627c19808] ...
	I0507 19:55:48.192533    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d27627c19808"
	I0507 19:55:48.214948    5068 command_runner.go:130] > .:53
	I0507 19:55:48.215811    5068 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = a3820eb745a9a768a035bf81145ae0754aeb40457ffd5109db8c64dac842ada6c2edf6f9e6a410714e0f5cbc9cd90cb925a2fb37599adf58a40dc1bc5fa339b9
	I0507 19:55:48.215811    5068 command_runner.go:130] > CoreDNS-1.11.1
	I0507 19:55:48.215888    5068 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0507 19:55:48.215921    5068 command_runner.go:130] > [INFO] 127.0.0.1:50649 - 62527 "HINFO IN 8322179340745765625.4555534598598098973. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.052335947s
	I0507 19:55:48.217643    5068 logs.go:123] Gathering logs for kube-proxy [aa9692c1fbd3] ...
	I0507 19:55:48.217643    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 aa9692c1fbd3"
	I0507 19:55:48.243528    5068 command_runner.go:130] ! I0507 19:33:59.788332       1 server_linux.go:69] "Using iptables proxy"
	I0507 19:55:48.243528    5068 command_runner.go:130] ! I0507 19:33:59.819474       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.143.74"]
	I0507 19:55:48.243528    5068 command_runner.go:130] ! I0507 19:33:59.872130       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0507 19:55:48.243528    5068 command_runner.go:130] ! I0507 19:33:59.872292       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0507 19:55:48.243528    5068 command_runner.go:130] ! I0507 19:33:59.872320       1 server_linux.go:165] "Using iptables Proxier"
	I0507 19:55:48.244240    5068 command_runner.go:130] ! I0507 19:33:59.878610       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0507 19:55:48.244240    5068 command_runner.go:130] ! I0507 19:33:59.879634       1 server.go:872] "Version info" version="v1.30.0"
	I0507 19:55:48.244240    5068 command_runner.go:130] ! I0507 19:33:59.879774       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:48.244240    5068 command_runner.go:130] ! I0507 19:33:59.883100       1 config.go:192] "Starting service config controller"
	I0507 19:55:48.244240    5068 command_runner.go:130] ! I0507 19:33:59.884238       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0507 19:55:48.244240    5068 command_runner.go:130] ! I0507 19:33:59.884310       1 config.go:101] "Starting endpoint slice config controller"
	I0507 19:55:48.244240    5068 command_runner.go:130] ! I0507 19:33:59.884544       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0507 19:55:48.244240    5068 command_runner.go:130] ! I0507 19:33:59.886801       1 config.go:319] "Starting node config controller"
	I0507 19:55:48.244240    5068 command_runner.go:130] ! I0507 19:33:59.888528       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0507 19:55:48.244240    5068 command_runner.go:130] ! I0507 19:33:59.985346       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0507 19:55:48.244240    5068 command_runner.go:130] ! I0507 19:33:59.985458       1 shared_informer.go:320] Caches are synced for service config
	I0507 19:55:48.244240    5068 command_runner.go:130] ! I0507 19:33:59.988897       1 shared_informer.go:320] Caches are synced for node config
	I0507 19:55:48.246651    5068 logs.go:123] Gathering logs for kindnet [29b5cae0b8f1] ...
	I0507 19:55:48.246651    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 29b5cae0b8f1"
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:54:35.653367       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:54:35.653969       1 main.go:107] hostIP = 172.19.135.22
	I0507 19:55:48.268473    5068 command_runner.go:130] ! podIP = 172.19.143.74
	I0507 19:55:48.268473    5068 command_runner.go:130] ! W0507 19:54:35.653976       1 main.go:109] hostIP(= "172.19.135.22") != podIP(= "172.19.143.74") but must be running with host network: 
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:54:35.655401       1 main.go:116] setting mtu 1500 for CNI 
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:54:35.655532       1 main.go:146] kindnetd IP family: "ipv4"
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:54:35.655617       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:05.983217       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: i/o timeout
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:06.001182       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:06.001219       1 main.go:227] handling current node
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:06.001493       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:06.001598       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:06.001955       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 172.19.143.144 Flags: [] Table: 0} 
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:06.036933       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:06.037052       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:06.037122       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.3.0/24 Src: <nil> Gw: 172.19.129.4 Flags: [] Table: 0} 
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:16.046470       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:16.046556       1 main.go:227] handling current node
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:16.046569       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:16.046577       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:16.046933       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:16.046957       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:26.058109       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:26.058254       1 main.go:227] handling current node
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:26.058265       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:26.058271       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:26.058667       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:26.058697       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:36.070650       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:36.070781       1 main.go:227] handling current node
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:36.070793       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:36.070834       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:36.071124       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:36.071149       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:46.075806       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:46.075899       1 main.go:227] handling current node
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:46.075910       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:46.075917       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:46.076305       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:55:48.268473    5068 command_runner.go:130] ! I0507 19:55:46.076331       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:55:48.271306    5068 logs.go:123] Gathering logs for dmesg ...
	I0507 19:55:48.271306    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 19:55:48.291769    5068 command_runner.go:130] > [May 7 19:52] You have booted with nomodeset. This means your GPU drivers are DISABLED
	I0507 19:55:48.291769    5068 command_runner.go:130] > [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	I0507 19:55:48.291769    5068 command_runner.go:130] > [  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	I0507 19:55:48.292333    5068 command_runner.go:130] > [  +0.116232] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	I0507 19:55:48.292333    5068 command_runner.go:130] > [  +0.022195] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	I0507 19:55:48.292333    5068 command_runner.go:130] > [  +0.000003] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	I0507 19:55:48.292394    5068 command_runner.go:130] > [  +0.000001] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	I0507 19:55:48.292394    5068 command_runner.go:130] > [  +0.059863] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	I0507 19:55:48.292394    5068 command_runner.go:130] > [  +0.024233] * Found PM-Timer Bug on the chipset. Due to workarounds for a bug,
	I0507 19:55:48.292455    5068 command_runner.go:130] >               * this clock source is slow. Consider trying other clock sources
	I0507 19:55:48.292455    5068 command_runner.go:130] > [May 7 19:53] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	I0507 19:55:48.292455    5068 command_runner.go:130] > [  +1.293154] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	I0507 19:55:48.292455    5068 command_runner.go:130] > [  +1.138766] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	I0507 19:55:48.292455    5068 command_runner.go:130] > [  +7.459478] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	I0507 19:55:48.292455    5068 command_runner.go:130] > [  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	I0507 19:55:48.292455    5068 command_runner.go:130] > [  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	I0507 19:55:48.292455    5068 command_runner.go:130] > [ +43.605395] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	I0507 19:55:48.292455    5068 command_runner.go:130] > [  +0.173535] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	I0507 19:55:48.292455    5068 command_runner.go:130] > [May 7 19:54] systemd-fstab-generator[975]: Ignoring "noauto" option for root device
	I0507 19:55:48.292455    5068 command_runner.go:130] > [  +0.087049] kauditd_printk_skb: 73 callbacks suppressed
	I0507 19:55:48.292455    5068 command_runner.go:130] > [  +0.469142] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	I0507 19:55:48.292455    5068 command_runner.go:130] > [  +0.182768] systemd-fstab-generator[1025]: Ignoring "noauto" option for root device
	I0507 19:55:48.292455    5068 command_runner.go:130] > [  +0.198440] systemd-fstab-generator[1039]: Ignoring "noauto" option for root device
	I0507 19:55:48.292733    5068 command_runner.go:130] > [  +2.865339] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	I0507 19:55:48.292733    5068 command_runner.go:130] > [  +0.189423] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	I0507 19:55:48.292733    5068 command_runner.go:130] > [  +0.164316] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	I0507 19:55:48.292733    5068 command_runner.go:130] > [  +0.220106] systemd-fstab-generator[1266]: Ignoring "noauto" option for root device
	I0507 19:55:48.292733    5068 command_runner.go:130] > [  +0.801286] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	I0507 19:55:48.292733    5068 command_runner.go:130] > [  +0.081896] kauditd_printk_skb: 205 callbacks suppressed
	I0507 19:55:48.292733    5068 command_runner.go:130] > [  +3.512673] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
	I0507 19:55:48.292733    5068 command_runner.go:130] > [  +1.511112] kauditd_printk_skb: 64 callbacks suppressed
	I0507 19:55:48.292733    5068 command_runner.go:130] > [  +5.012853] kauditd_printk_skb: 25 callbacks suppressed
	I0507 19:55:48.292733    5068 command_runner.go:130] > [  +3.386216] systemd-fstab-generator[2338]: Ignoring "noauto" option for root device
	I0507 19:55:48.292733    5068 command_runner.go:130] > [  +7.924740] kauditd_printk_skb: 55 callbacks suppressed
	I0507 19:55:48.295535    5068 logs.go:123] Gathering logs for etcd [ac320a872e77] ...
	I0507 19:55:48.295582    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ac320a872e77"
	I0507 19:55:48.316722    5068 command_runner.go:130] ! {"level":"warn","ts":"2024-05-07T19:54:30.550295Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0507 19:55:48.316722    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.55691Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://172.19.135.22:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://172.19.135.22:2380","--initial-cluster=multinode-600000=https://172.19.135.22:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://172.19.135.22:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://172.19.135.22:2380","--name=multinode-600000","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]}
	I0507 19:55:48.317605    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.557392Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"}
	I0507 19:55:48.317686    5068 command_runner.go:130] ! {"level":"warn","ts":"2024-05-07T19:54:30.557435Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."}
	I0507 19:55:48.317686    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.557445Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://172.19.135.22:2380"]}
	I0507 19:55:48.317751    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.557477Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0507 19:55:48.317751    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.567644Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://172.19.135.22:2379"]}
	I0507 19:55:48.317882    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.569078Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"multinode-600000","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://172.19.135.22:2380"],"listen-peer-urls":["https://172.19.135.22:2380"],"advertise-client-urls":["https://172.19.135.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.135.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"}
	I0507 19:55:48.317932    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.589786Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"17.628697ms"}
	I0507 19:55:48.317932    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.62481Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	I0507 19:55:48.317965    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.649734Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"9263975694bef132","local-member-id":"aac5eb588ad33a11","commit-index":1911}
	I0507 19:55:48.318001    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.650002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 switched to configuration voters=()"}
	I0507 19:55:48.318041    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.650099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became follower at term 2"}
	I0507 19:55:48.318041    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.650259Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft aac5eb588ad33a11 [peers: [], term: 2, commit: 1911, applied: 0, lastindex: 1911, lastterm: 2]"}
	I0507 19:55:48.318094    5068 command_runner.go:130] ! {"level":"warn","ts":"2024-05-07T19:54:30.665767Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	I0507 19:55:48.318134    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.674281Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":1115}
	I0507 19:55:48.318134    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.683184Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":1668}
	I0507 19:55:48.318180    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.694481Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	I0507 19:55:48.318180    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.704352Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"aac5eb588ad33a11","timeout":"7s"}
	I0507 19:55:48.318221    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.708328Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"aac5eb588ad33a11"}
	I0507 19:55:48.318267    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.708388Z","caller":"etcdserver/server.go:860","msg":"starting etcd server","local-member-id":"aac5eb588ad33a11","local-server-version":"3.5.12","cluster-version":"to_be_decided"}
	I0507 19:55:48.318267    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.710881Z","caller":"etcdserver/server.go:760","msg":"starting initial election tick advance","election-ticks":10}
	I0507 19:55:48.318307    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.711472Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	I0507 19:55:48.318307    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.71284Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	I0507 19:55:48.318353    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.712991Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	I0507 19:55:48.318392    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.713531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 switched to configuration voters=(12305500322378496529)"}
	I0507 19:55:48.318392    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.713649Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9263975694bef132","local-member-id":"aac5eb588ad33a11","added-peer-id":"aac5eb588ad33a11","added-peer-peer-urls":["https://172.19.143.74:2380"]}
	I0507 19:55:48.318441    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.714311Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9263975694bef132","local-member-id":"aac5eb588ad33a11","cluster-version":"3.5"}
	I0507 19:55:48.318481    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.714406Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	I0507 19:55:48.318534    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.727875Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	I0507 19:55:48.318573    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.733606Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.135.22:2380"}
	I0507 19:55:48.318619    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.733844Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.135.22:2380"}
	I0507 19:55:48.318666    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.734234Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aac5eb588ad33a11","initial-advertise-peer-urls":["https://172.19.135.22:2380"],"listen-peer-urls":["https://172.19.135.22:2380"],"advertise-client-urls":["https://172.19.135.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.135.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	I0507 19:55:48.318666    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:30.735199Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	I0507 19:55:48.318723    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 is starting a new election at term 2"}
	I0507 19:55:48.318762    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became pre-candidate at term 2"}
	I0507 19:55:48.318762    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 received MsgPreVoteResp from aac5eb588ad33a11 at term 2"}
	I0507 19:55:48.318762    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became candidate at term 3"}
	I0507 19:55:48.318762    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 received MsgVoteResp from aac5eb588ad33a11 at term 3"}
	I0507 19:55:48.318824    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became leader at term 3"}
	I0507 19:55:48.318824    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.251563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aac5eb588ad33a11 elected leader aac5eb588ad33a11 at term 3"}
	I0507 19:55:48.318857    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.258987Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aac5eb588ad33a11","local-member-attributes":"{Name:multinode-600000 ClientURLs:[https://172.19.135.22:2379]}","request-path":"/0/members/aac5eb588ad33a11/attributes","cluster-id":"9263975694bef132","publish-timeout":"7s"}
	I0507 19:55:48.318902    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.259161Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0507 19:55:48.318902    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.259624Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	I0507 19:55:48.318936    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.259711Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	I0507 19:55:48.318936    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.259193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	I0507 19:55:48.318966    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.263273Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.135.22:2379"}
	I0507 19:55:48.318966    5068 command_runner.go:130] ! {"level":"info","ts":"2024-05-07T19:54:32.265301Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	I0507 19:55:48.325938    5068 logs.go:123] Gathering logs for coredns [9550b237d8d7] ...
	I0507 19:55:48.325938    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 9550b237d8d7"
	I0507 19:55:48.348005    5068 command_runner.go:130] > .:53
	I0507 19:55:48.348847    5068 command_runner.go:130] > [INFO] plugin/reload: Running configuration SHA512 = a3820eb745a9a768a035bf81145ae0754aeb40457ffd5109db8c64dac842ada6c2edf6f9e6a410714e0f5cbc9cd90cb925a2fb37599adf58a40dc1bc5fa339b9
	I0507 19:55:48.348847    5068 command_runner.go:130] > CoreDNS-1.11.1
	I0507 19:55:48.348847    5068 command_runner.go:130] > linux/amd64, go1.20.7, ae2bbc2
	I0507 19:55:48.348929    5068 command_runner.go:130] > [INFO] 127.0.0.1:52654 - 36159 "HINFO IN 3626502665556373881.284047733441029162. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.030998756s
	I0507 19:55:48.348929    5068 command_runner.go:130] > [INFO] 10.244.1.2:39771 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00031622s
	I0507 19:55:48.348929    5068 command_runner.go:130] > [INFO] 10.244.1.2:55622 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.122912472s
	I0507 19:55:48.348929    5068 command_runner.go:130] > [INFO] 10.244.1.2:43817 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.066971198s
	I0507 19:55:48.348929    5068 command_runner.go:130] > [INFO] 10.244.1.2:39650 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.458807699s
	I0507 19:55:48.349031    5068 command_runner.go:130] > [INFO] 10.244.0.3:47684 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000164311s
	I0507 19:55:48.349031    5068 command_runner.go:130] > [INFO] 10.244.0.3:35317 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 140 0.00014611s
	I0507 19:55:48.349059    5068 command_runner.go:130] > [INFO] 10.244.0.3:42135 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd 60 0.000170411s
	I0507 19:55:48.349059    5068 command_runner.go:130] > [INFO] 10.244.0.3:41756 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,aa,rd,ra 140 0.000172612s
	I0507 19:55:48.349059    5068 command_runner.go:130] > [INFO] 10.244.1.2:40802 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000169011s
	I0507 19:55:48.349059    5068 command_runner.go:130] > [INFO] 10.244.1.2:55691 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.060031941s
	I0507 19:55:48.349059    5068 command_runner.go:130] > [INFO] 10.244.1.2:46687 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000212614s
	I0507 19:55:48.349059    5068 command_runner.go:130] > [INFO] 10.244.1.2:51698 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000276418s
	I0507 19:55:48.349059    5068 command_runner.go:130] > [INFO] 10.244.1.2:40943 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.014055822s
	I0507 19:55:48.349177    5068 command_runner.go:130] > [INFO] 10.244.1.2:55853 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000128309s
	I0507 19:55:48.349177    5068 command_runner.go:130] > [INFO] 10.244.1.2:34444 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000187212s
	I0507 19:55:48.349177    5068 command_runner.go:130] > [INFO] 10.244.1.2:54956 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000091106s
	I0507 19:55:48.349250    5068 command_runner.go:130] > [INFO] 10.244.0.3:37511 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00031542s
	I0507 19:55:48.349250    5068 command_runner.go:130] > [INFO] 10.244.0.3:47331 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061304s
	I0507 19:55:48.349250    5068 command_runner.go:130] > [INFO] 10.244.0.3:36195 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211814s
	I0507 19:55:48.349309    5068 command_runner.go:130] > [INFO] 10.244.0.3:37240 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014531s
	I0507 19:55:48.349309    5068 command_runner.go:130] > [INFO] 10.244.0.3:56992 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00014411s
	I0507 19:55:48.349309    5068 command_runner.go:130] > [INFO] 10.244.0.3:53922 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127508s
	I0507 19:55:48.349369    5068 command_runner.go:130] > [INFO] 10.244.0.3:51034 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000225815s
	I0507 19:55:48.349369    5068 command_runner.go:130] > [INFO] 10.244.0.3:45123 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130808s
	I0507 19:55:48.349369    5068 command_runner.go:130] > [INFO] 10.244.1.2:53185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190512s
	I0507 19:55:48.349428    5068 command_runner.go:130] > [INFO] 10.244.1.2:47331 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056804s
	I0507 19:55:48.349428    5068 command_runner.go:130] > [INFO] 10.244.1.2:42551 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058104s
	I0507 19:55:48.349488    5068 command_runner.go:130] > [INFO] 10.244.1.2:47860 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057104s
	I0507 19:55:48.349488    5068 command_runner.go:130] > [INFO] 10.244.0.3:53037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190312s
	I0507 19:55:48.349565    5068 command_runner.go:130] > [INFO] 10.244.0.3:60613 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143109s
	I0507 19:55:48.349565    5068 command_runner.go:130] > [INFO] 10.244.0.3:33867 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069105s
	I0507 19:55:48.349565    5068 command_runner.go:130] > [INFO] 10.244.0.3:40289 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014191s
	I0507 19:55:48.349654    5068 command_runner.go:130] > [INFO] 10.244.1.2:55673 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204514s
	I0507 19:55:48.349654    5068 command_runner.go:130] > [INFO] 10.244.1.2:46474 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132609s
	I0507 19:55:48.349654    5068 command_runner.go:130] > [INFO] 10.244.1.2:48070 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000170211s
	I0507 19:55:48.349654    5068 command_runner.go:130] > [INFO] 10.244.1.2:56147 - 5 "PTR IN 1.128.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093806s
	I0507 19:55:48.349654    5068 command_runner.go:130] > [INFO] 10.244.0.3:39426 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107507s
	I0507 19:55:48.349753    5068 command_runner.go:130] > [INFO] 10.244.0.3:42569 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000295619s
	I0507 19:55:48.349753    5068 command_runner.go:130] > [INFO] 10.244.0.3:56970 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000267917s
	I0507 19:55:48.349753    5068 command_runner.go:130] > [INFO] 10.244.0.3:55625 - 5 "PTR IN 1.128.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00014751s
	I0507 19:55:48.349813    5068 command_runner.go:130] > [INFO] SIGTERM: Shutting down servers then terminating
	I0507 19:55:48.349840    5068 command_runner.go:130] > [INFO] plugin/health: Going into lameduck mode for 5s
	I0507 19:55:48.351112    5068 logs.go:123] Gathering logs for kube-controller-manager [922d1e2b8745] ...
	I0507 19:55:48.351112    5068 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 922d1e2b8745"
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:31.703073       1 serving.go:380] Generated self-signed cert in-memory
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:32.356571       1 controllermanager.go:189] "Starting" version="v1.30.0"
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:32.356606       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:32.361009       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:32.362062       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:32.362316       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:32.362806       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:35.660463       1 controllermanager.go:759] "Started controller" controller="serviceaccount-token-controller"
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:35.661512       1 shared_informer.go:313] Waiting for caches to sync for tokens
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:35.672846       1 controllermanager.go:759] "Started controller" controller="cronjob-controller"
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:35.673901       1 cronjob_controllerv2.go:139] "Starting cronjob controller v2" logger="cronjob-controller"
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:35.674100       1 shared_informer.go:313] Waiting for caches to sync for cronjob
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:35.677134       1 controllermanager.go:759] "Started controller" controller="ttl-controller"
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:35.677224       1 ttl_controller.go:124] "Starting TTL controller" logger="ttl-controller"
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:35.677646       1 shared_informer.go:313] Waiting for caches to sync for TTL
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:35.687463       1 controllermanager.go:759] "Started controller" controller="persistentvolume-attach-detach-controller"
	I0507 19:55:48.377436    5068 command_runner.go:130] ! I0507 19:54:35.690256       1 attach_detach_controller.go:342] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I0507 19:55:48.378445    5068 command_runner.go:130] ! I0507 19:54:35.690418       1 shared_informer.go:313] Waiting for caches to sync for attach detach
	I0507 19:55:48.378547    5068 command_runner.go:130] ! I0507 19:54:35.693293       1 controllermanager.go:759] "Started controller" controller="serviceaccount-controller"
	I0507 19:55:48.378567    5068 command_runner.go:130] ! I0507 19:54:35.693482       1 serviceaccounts_controller.go:111] "Starting service account controller" logger="serviceaccount-controller"
	I0507 19:55:48.378614    5068 command_runner.go:130] ! I0507 19:54:35.693648       1 shared_informer.go:313] Waiting for caches to sync for service account
	I0507 19:55:48.378644    5068 command_runner.go:130] ! I0507 19:54:35.705135       1 controllermanager.go:759] "Started controller" controller="garbage-collector-controller"
	I0507 19:55:48.378644    5068 command_runner.go:130] ! I0507 19:54:35.705560       1 garbagecollector.go:146] "Starting controller" logger="garbage-collector-controller" controller="garbagecollector"
	I0507 19:55:48.378683    5068 command_runner.go:130] ! I0507 19:54:35.705715       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0507 19:55:48.378745    5068 command_runner.go:130] ! I0507 19:54:35.707645       1 graph_builder.go:336] "Running" logger="garbage-collector-controller" component="GraphBuilder"
	I0507 19:55:48.378782    5068 command_runner.go:130] ! I0507 19:54:35.714544       1 controllermanager.go:759] "Started controller" controller="daemonset-controller"
	I0507 19:55:48.378814    5068 command_runner.go:130] ! I0507 19:54:35.714950       1 daemon_controller.go:289] "Starting daemon sets controller" logger="daemonset-controller"
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.714979       1 shared_informer.go:313] Waiting for caches to sync for daemon sets
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.718207       1 controllermanager.go:759] "Started controller" controller="root-ca-certificate-publisher-controller"
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.718555       1 publisher.go:102] "Starting root CA cert publisher controller" logger="root-ca-certificate-publisher-controller"
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.719592       1 shared_informer.go:313] Waiting for caches to sync for crt configmap
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.721267       1 controllermanager.go:759] "Started controller" controller="statefulset-controller"
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.722621       1 stateful_set.go:161] "Starting stateful set controller" logger="statefulset-controller"
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.722870       1 shared_informer.go:313] Waiting for caches to sync for stateful set
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.725345       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-approving-controller" name="csrapproving"
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.725516       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrapproving
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.727155       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-approving-controller"
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.732889       1 node_lifecycle_controller.go:425] "Controller will reconcile labels" logger="node-lifecycle-controller"
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.733036       1 controllermanager.go:759] "Started controller" controller="node-lifecycle-controller"
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.733340       1 node_lifecycle_controller.go:459] "Sending events to api server" logger="node-lifecycle-controller"
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.733465       1 node_lifecycle_controller.go:470] "Starting node controller" logger="node-lifecycle-controller"
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.734424       1 shared_informer.go:313] Waiting for caches to sync for taint
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.739429       1 controllermanager.go:759] "Started controller" controller="token-cleaner-controller"
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.740234       1 tokencleaner.go:112] "Starting token cleaner controller" logger="token-cleaner-controller"
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.740690       1 shared_informer.go:313] Waiting for caches to sync for token_cleaner
	I0507 19:55:48.378869    5068 command_runner.go:130] ! I0507 19:54:35.740915       1 shared_informer.go:320] Caches are synced for token_cleaner
	I0507 19:55:48.379392    5068 command_runner.go:130] ! E0507 19:54:35.758883       1 core.go:105] "Failed to start service controller" err="WARNING: no cloud provider provided, services of type LoadBalancer will fail" logger="service-lb-controller"
	I0507 19:55:48.379476    5068 command_runner.go:130] ! I0507 19:54:35.759554       1 controllermanager.go:737] "Warning: skipping controller" controller="service-lb-controller"
	I0507 19:55:48.379513    5068 command_runner.go:130] ! I0507 19:54:35.764996       1 shared_informer.go:320] Caches are synced for tokens
	I0507 19:55:48.379546    5068 command_runner.go:130] ! I0507 19:54:35.770304       1 controllermanager.go:759] "Started controller" controller="persistentvolume-expander-controller"
	I0507 19:55:48.379618    5068 command_runner.go:130] ! I0507 19:54:35.770613       1 expand_controller.go:329] "Starting expand controller" logger="persistentvolume-expander-controller"
	I0507 19:55:48.379618    5068 command_runner.go:130] ! I0507 19:54:35.771644       1 shared_informer.go:313] Waiting for caches to sync for expand
	I0507 19:55:48.379655    5068 command_runner.go:130] ! I0507 19:54:35.773532       1 controllermanager.go:759] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I0507 19:55:48.379688    5068 command_runner.go:130] ! I0507 19:54:35.773999       1 pvc_protection_controller.go:102] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I0507 19:55:48.379754    5068 command_runner.go:130] ! I0507 19:54:35.776366       1 shared_informer.go:313] Waiting for caches to sync for PVC protection
	I0507 19:55:48.379792    5068 command_runner.go:130] ! I0507 19:54:35.776291       1 controllermanager.go:759] "Started controller" controller="pod-garbage-collector-controller"
	I0507 19:55:48.379824    5068 command_runner.go:130] ! I0507 19:54:35.777049       1 gc_controller.go:101] "Starting GC controller" logger="pod-garbage-collector-controller"
	I0507 19:55:48.379854    5068 command_runner.go:130] ! I0507 19:54:35.778718       1 shared_informer.go:313] Waiting for caches to sync for GC
	I0507 19:55:48.379879    5068 command_runner.go:130] ! I0507 19:54:35.782053       1 controllermanager.go:759] "Started controller" controller="disruption-controller"
	I0507 19:55:48.379879    5068 command_runner.go:130] ! I0507 19:54:35.782295       1 disruption.go:433] "Sending events to api server." logger="disruption-controller"
	I0507 19:55:48.379879    5068 command_runner.go:130] ! I0507 19:54:35.783178       1 disruption.go:444] "Starting disruption controller" logger="disruption-controller"
	I0507 19:55:48.379879    5068 command_runner.go:130] ! I0507 19:54:35.783590       1 shared_informer.go:313] Waiting for caches to sync for disruption
	I0507 19:55:48.379879    5068 command_runner.go:130] ! I0507 19:54:35.785509       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-serving"
	I0507 19:55:48.379879    5068 command_runner.go:130] ! I0507 19:54:35.785650       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-serving
	I0507 19:55:48.379879    5068 command_runner.go:130] ! I0507 19:54:35.785771       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:48.379879    5068 command_runner.go:130] ! I0507 19:54:35.786304       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kubelet-client"
	I0507 19:55:48.379879    5068 command_runner.go:130] ! I0507 19:54:35.786711       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kubelet-client
	I0507 19:55:48.379879    5068 command_runner.go:130] ! I0507 19:54:35.788143       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:48.379879    5068 command_runner.go:130] ! I0507 19:54:35.788161       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-kube-apiserver-client"
	I0507 19:55:48.379879    5068 command_runner.go:130] ! I0507 19:54:35.788891       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-kube-apiserver-client
	I0507 19:55:48.379879    5068 command_runner.go:130] ! I0507 19:54:35.788187       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:48.379879    5068 command_runner.go:130] ! I0507 19:54:35.788425       1 certificate_controller.go:115] "Starting certificate controller" logger="certificatesigningrequest-signing-controller" name="csrsigning-legacy-unknown"
	I0507 19:55:48.379879    5068 command_runner.go:130] ! I0507 19:54:35.789279       1 shared_informer.go:313] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0507 19:55:48.379879    5068 command_runner.go:130] ! I0507 19:54:35.788437       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0507 19:55:48.380402    5068 command_runner.go:130] ! I0507 19:54:35.788403       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-signing-controller"
	I0507 19:55:48.380437    5068 command_runner.go:130] ! E0507 19:54:35.794689       1 core.go:274] "Failed to start cloud node lifecycle controller" err="no cloud provider provided" logger="cloud-node-lifecycle-controller"
	I0507 19:55:48.380507    5068 command_runner.go:130] ! I0507 19:54:35.794706       1 controllermanager.go:737] "Warning: skipping controller" controller="cloud-node-lifecycle-controller"
	I0507 19:55:48.380541    5068 command_runner.go:130] ! I0507 19:54:35.797181       1 controllermanager.go:759] "Started controller" controller="persistentvolume-binder-controller"
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:35.797390       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="resourceclaim-controller" requiredFeatureGates=["DynamicResourceAllocation"]
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:35.797366       1 pv_controller_base.go:313] "Starting persistent volume controller" logger="persistentvolume-binder-controller"
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:35.798435       1 shared_informer.go:313] Waiting for caches to sync for persistent volume
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:35.799150       1 controllermanager.go:759] "Started controller" controller="taint-eviction-controller"
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:35.799419       1 taint_eviction.go:285] "Starting" logger="taint-eviction-controller" controller="taint-eviction-controller"
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:35.800319       1 taint_eviction.go:291] "Sending events to api server" logger="taint-eviction-controller"
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:35.800396       1 shared_informer.go:313] Waiting for caches to sync for taint-eviction-controller
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:35.801149       1 controllermanager.go:759] "Started controller" controller="replicationcontroller-controller"
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:35.801340       1 replica_set.go:214] "Starting controller" logger="replicationcontroller-controller" name="replicationcontroller"
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:35.805459       1 shared_informer.go:313] Waiting for caches to sync for ReplicationController
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:35.806312       1 controllermanager.go:759] "Started controller" controller="deployment-controller"
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:35.806898       1 deployment_controller.go:168] "Starting controller" logger="deployment-controller" controller="deployment"
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:35.806915       1 shared_informer.go:313] Waiting for caches to sync for deployment
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:35.820458       1 controllermanager.go:759] "Started controller" controller="horizontal-pod-autoscaler-controller"
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:35.823993       1 horizontal.go:196] "Starting HPA controller" logger="horizontal-pod-autoscaler-controller"
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:35.824174       1 shared_informer.go:313] Waiting for caches to sync for HPA
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:45.843537       1 range_allocator.go:112] "No Secondary Service CIDR provided. Skipping filtering out secondary service addresses" logger="node-ipam-controller"
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:45.845601       1 controllermanager.go:759] "Started controller" controller="node-ipam-controller"
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:45.845839       1 node_ipam_controller.go:156] "Starting ipam controller" logger="node-ipam-controller"
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:45.846020       1 shared_informer.go:313] Waiting for caches to sync for node
	I0507 19:55:48.380597    5068 command_runner.go:130] ! I0507 19:54:45.856361       1 controllermanager.go:759] "Started controller" controller="persistentvolume-protection-controller"
	I0507 19:55:48.381130    5068 command_runner.go:130] ! I0507 19:54:45.856445       1 pv_protection_controller.go:78] "Starting PV protection controller" logger="persistentvolume-protection-controller"
	I0507 19:55:48.381130    5068 command_runner.go:130] ! I0507 19:54:45.856582       1 shared_informer.go:313] Waiting for caches to sync for PV protection
	I0507 19:55:48.381215    5068 command_runner.go:130] ! I0507 19:54:45.860605       1 controllermanager.go:759] "Started controller" controller="ttl-after-finished-controller"
	I0507 19:55:48.381215    5068 command_runner.go:130] ! I0507 19:54:45.861230       1 ttlafterfinished_controller.go:109] "Starting TTL after finished controller" logger="ttl-after-finished-controller"
	I0507 19:55:48.381215    5068 command_runner.go:130] ! I0507 19:54:45.861688       1 shared_informer.go:313] Waiting for caches to sync for TTL after finished
	I0507 19:55:48.381289    5068 command_runner.go:130] ! I0507 19:54:45.882679       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="serviceaccounts"
	I0507 19:55:48.381326    5068 command_runner.go:130] ! I0507 19:54:45.882882       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="jobs.batch"
	I0507 19:55:48.381363    5068 command_runner.go:130] ! I0507 19:54:45.883004       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="roles.rbac.authorization.k8s.io"
	I0507 19:55:48.381387    5068 command_runner.go:130] ! I0507 19:54:45.883100       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="csistoragecapacities.storage.k8s.io"
	I0507 19:55:48.381451    5068 command_runner.go:130] ! I0507 19:54:45.883309       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpointslices.discovery.k8s.io"
	I0507 19:55:48.381451    5068 command_runner.go:130] ! I0507 19:54:45.883768       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="podtemplates"
	I0507 19:55:48.381451    5068 command_runner.go:130] ! I0507 19:54:45.884103       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="daemonsets.apps"
	I0507 19:55:48.381531    5068 command_runner.go:130] ! I0507 19:54:45.884144       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="networkpolicies.networking.k8s.io"
	I0507 19:55:48.381605    5068 command_runner.go:130] ! I0507 19:54:45.884169       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="ingresses.networking.k8s.io"
	I0507 19:55:48.381605    5068 command_runner.go:130] ! I0507 19:54:45.884544       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="replicasets.apps"
	I0507 19:55:48.381683    5068 command_runner.go:130] ! I0507 19:54:45.884707       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="rolebindings.rbac.authorization.k8s.io"
	I0507 19:55:48.381683    5068 command_runner.go:130] ! I0507 19:54:45.884806       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="deployments.apps"
	I0507 19:55:48.381758    5068 command_runner.go:130] ! I0507 19:54:45.884934       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="poddisruptionbudgets.policy"
	I0507 19:55:48.381758    5068 command_runner.go:130] ! I0507 19:54:45.884999       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="horizontalpodautoscalers.autoscaling"
	I0507 19:55:48.381835    5068 command_runner.go:130] ! I0507 19:54:45.885027       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="leases.coordination.k8s.io"
	I0507 19:55:48.381835    5068 command_runner.go:130] ! I0507 19:54:45.885214       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="limitranges"
	I0507 19:55:48.381913    5068 command_runner.go:130] ! I0507 19:54:45.885361       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="statefulsets.apps"
	I0507 19:55:48.381913    5068 command_runner.go:130] ! I0507 19:54:45.885395       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="controllerrevisions.apps"
	I0507 19:55:48.382000    5068 command_runner.go:130] ! I0507 19:54:45.885452       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="cronjobs.batch"
	I0507 19:55:48.382000    5068 command_runner.go:130] ! I0507 19:54:45.885513       1 resource_quota_monitor.go:224] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="endpoints"
	I0507 19:55:48.382086    5068 command_runner.go:130] ! I0507 19:54:45.885658       1 controllermanager.go:759] "Started controller" controller="resourcequota-controller"
	I0507 19:55:48.382155    5068 command_runner.go:130] ! I0507 19:54:45.885798       1 resource_quota_controller.go:294] "Starting resource quota controller" logger="resourcequota-controller"
	I0507 19:55:48.382155    5068 command_runner.go:130] ! I0507 19:54:45.885854       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0507 19:55:48.382235    5068 command_runner.go:130] ! I0507 19:54:45.885875       1 resource_quota_monitor.go:305] "QuotaMonitor running" logger="resourcequota-controller"
	I0507 19:55:48.382235    5068 command_runner.go:130] ! I0507 19:54:45.888915       1 controllermanager.go:759] "Started controller" controller="replicaset-controller"
	I0507 19:55:48.382308    5068 command_runner.go:130] ! I0507 19:54:45.890326       1 replica_set.go:214] "Starting controller" logger="replicaset-controller" name="replicaset"
	I0507 19:55:48.382308    5068 command_runner.go:130] ! I0507 19:54:45.890549       1 shared_informer.go:313] Waiting for caches to sync for ReplicaSet
	I0507 19:55:48.382308    5068 command_runner.go:130] ! I0507 19:54:45.892442       1 controllermanager.go:759] "Started controller" controller="bootstrap-signer-controller"
	I0507 19:55:48.382308    5068 command_runner.go:130] ! I0507 19:54:45.892857       1 controllermanager.go:737] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I0507 19:55:48.382369    5068 command_runner.go:130] ! I0507 19:54:45.892697       1 shared_informer.go:313] Waiting for caches to sync for bootstrap_signer
	I0507 19:55:48.382369    5068 command_runner.go:130] ! I0507 19:54:45.895556       1 controllermanager.go:759] "Started controller" controller="endpointslice-controller"
	I0507 19:55:48.382369    5068 command_runner.go:130] ! I0507 19:54:45.896185       1 endpointslice_controller.go:265] "Starting endpoint slice controller" logger="endpointslice-controller"
	I0507 19:55:48.382369    5068 command_runner.go:130] ! I0507 19:54:45.896210       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice
	I0507 19:55:48.382369    5068 command_runner.go:130] ! I0507 19:54:45.898050       1 controllermanager.go:759] "Started controller" controller="endpointslice-mirroring-controller"
	I0507 19:55:48.382437    5068 command_runner.go:130] ! I0507 19:54:45.898440       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I0507 19:55:48.382437    5068 command_runner.go:130] ! I0507 19:54:45.898466       1 shared_informer.go:313] Waiting for caches to sync for endpoint_slice_mirroring
	I0507 19:55:48.382437    5068 command_runner.go:130] ! I0507 19:54:45.901016       1 controllermanager.go:759] "Started controller" controller="clusterrole-aggregation-controller"
	I0507 19:55:48.382498    5068 command_runner.go:130] ! I0507 19:54:45.901365       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller" logger="clusterrole-aggregation-controller"
	I0507 19:55:48.382498    5068 command_runner.go:130] ! I0507 19:54:45.901496       1 shared_informer.go:313] Waiting for caches to sync for ClusterRoleAggregator
	I0507 19:55:48.382498    5068 command_runner.go:130] ! I0507 19:54:45.904035       1 controllermanager.go:759] "Started controller" controller="legacy-serviceaccount-token-cleaner-controller"
	I0507 19:55:48.382567    5068 command_runner.go:130] ! I0507 19:54:45.906504       1 legacy_serviceaccount_token_cleaner.go:103] "Starting legacy service account token cleaner controller" logger="legacy-serviceaccount-token-cleaner-controller"
	I0507 19:55:48.382567    5068 command_runner.go:130] ! I0507 19:54:45.906590       1 shared_informer.go:313] Waiting for caches to sync for legacy-service-account-token-cleaner
	I0507 19:55:48.382567    5068 command_runner.go:130] ! I0507 19:54:45.936436       1 controllermanager.go:759] "Started controller" controller="validatingadmissionpolicy-status-controller"
	I0507 19:55:48.382628    5068 command_runner.go:130] ! I0507 19:54:45.936514       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0507 19:55:48.382628    5068 command_runner.go:130] ! I0507 19:54:45.936644       1 shared_informer.go:313] Waiting for caches to sync for validatingadmissionpolicy-status
	I0507 19:55:48.382628    5068 command_runner.go:130] ! I0507 19:54:45.950622       1 controllermanager.go:759] "Started controller" controller="namespace-controller"
	I0507 19:55:48.382628    5068 command_runner.go:130] ! I0507 19:54:45.950687       1 namespace_controller.go:197] "Starting namespace controller" logger="namespace-controller"
	I0507 19:55:48.382693    5068 command_runner.go:130] ! I0507 19:54:45.952156       1 shared_informer.go:313] Waiting for caches to sync for namespace
	I0507 19:55:48.382693    5068 command_runner.go:130] ! I0507 19:54:45.960379       1 controllermanager.go:759] "Started controller" controller="job-controller"
	I0507 19:55:48.382693    5068 command_runner.go:130] ! I0507 19:54:45.960563       1 controllermanager.go:711] "Controller is disabled by a feature gate" controller="storageversion-garbage-collector-controller" requiredFeatureGates=["APIServerIdentity","StorageVersionAPI"]
	I0507 19:55:48.382754    5068 command_runner.go:130] ! I0507 19:54:45.960800       1 core.go:298] "Warning: configure-cloud-routes is set, but no cloud provider specified. Will not configure cloud provider routes." logger="node-route-controller"
	I0507 19:55:48.382754    5068 command_runner.go:130] ! I0507 19:54:45.960885       1 controllermanager.go:737] "Warning: skipping controller" controller="node-route-controller"
	I0507 19:55:48.382754    5068 command_runner.go:130] ! I0507 19:54:45.960448       1 job_controller.go:224] "Starting job controller" logger="job-controller"
	I0507 19:55:48.382754    5068 command_runner.go:130] ! I0507 19:54:45.960996       1 shared_informer.go:313] Waiting for caches to sync for job
	I0507 19:55:48.382815    5068 command_runner.go:130] ! I0507 19:54:45.964056       1 controllermanager.go:759] "Started controller" controller="ephemeral-volume-controller"
	I0507 19:55:48.382815    5068 command_runner.go:130] ! I0507 19:54:45.964077       1 controller.go:170] "Starting ephemeral volume controller" logger="ephemeral-volume-controller"
	I0507 19:55:48.382815    5068 command_runner.go:130] ! I0507 19:54:45.964454       1 shared_informer.go:313] Waiting for caches to sync for ephemeral
	I0507 19:55:48.382815    5068 command_runner.go:130] ! I0507 19:54:45.967293       1 controllermanager.go:759] "Started controller" controller="endpoints-controller"
	I0507 19:55:48.382815    5068 command_runner.go:130] ! I0507 19:54:45.967699       1 endpoints_controller.go:174] "Starting endpoint controller" logger="endpoints-controller"
	I0507 19:55:48.382881    5068 command_runner.go:130] ! I0507 19:54:45.967884       1 shared_informer.go:313] Waiting for caches to sync for endpoint
	I0507 19:55:48.382881    5068 command_runner.go:130] ! I0507 19:54:45.969920       1 controllermanager.go:759] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0507 19:55:48.382881    5068 command_runner.go:130] ! I0507 19:54:45.969950       1 cleaner.go:83] "Starting CSR cleaner controller" logger="certificatesigningrequest-cleaner-controller"
	I0507 19:55:48.382881    5068 command_runner.go:130] ! I0507 19:54:45.979639       1 shared_informer.go:313] Waiting for caches to sync for resource quota
	I0507 19:55:48.382944    5068 command_runner.go:130] ! I0507 19:54:45.993084       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0507 19:55:48.382944    5068 command_runner.go:130] ! I0507 19:54:45.993911       1 shared_informer.go:320] Caches are synced for service account
	I0507 19:55:48.382944    5068 command_runner.go:130] ! I0507 19:54:46.001799       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0507 19:55:48.382944    5068 command_runner.go:130] ! I0507 19:54:46.002705       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0507 19:55:48.383007    5068 command_runner.go:130] ! I0507 19:54:46.006101       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0507 19:55:48.383007    5068 command_runner.go:130] ! I0507 19:54:46.008805       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0507 19:55:48.383007    5068 command_runner.go:130] ! I0507 19:54:46.014352       1 shared_informer.go:313] Waiting for caches to sync for garbage collector
	I0507 19:55:48.383060    5068 command_runner.go:130] ! I0507 19:54:46.021643       1 shared_informer.go:320] Caches are synced for crt configmap
	I0507 19:55:48.383060    5068 command_runner.go:130] ! I0507 19:54:46.023805       1 shared_informer.go:320] Caches are synced for stateful set
	I0507 19:55:48.383060    5068 command_runner.go:130] ! I0507 19:54:46.027827       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0507 19:55:48.383060    5068 command_runner.go:130] ! I0507 19:54:46.052799       1 shared_informer.go:320] Caches are synced for namespace
	I0507 19:55:48.383060    5068 command_runner.go:130] ! I0507 19:54:46.056820       1 shared_informer.go:320] Caches are synced for PV protection
	I0507 19:55:48.383060    5068 command_runner.go:130] ! I0507 19:54:46.062319       1 shared_informer.go:320] Caches are synced for job
	I0507 19:55:48.383060    5068 command_runner.go:130] ! I0507 19:54:46.062392       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0507 19:55:48.383139    5068 command_runner.go:130] ! I0507 19:54:46.065647       1 shared_informer.go:320] Caches are synced for ephemeral
	I0507 19:55:48.383139    5068 command_runner.go:130] ! I0507 19:54:46.068108       1 shared_informer.go:320] Caches are synced for endpoint
	I0507 19:55:48.383139    5068 command_runner.go:130] ! I0507 19:54:46.072892       1 shared_informer.go:320] Caches are synced for expand
	I0507 19:55:48.383139    5068 command_runner.go:130] ! I0507 19:54:46.075814       1 shared_informer.go:320] Caches are synced for cronjob
	I0507 19:55:48.383203    5068 command_runner.go:130] ! I0507 19:54:46.077269       1 shared_informer.go:320] Caches are synced for PVC protection
	I0507 19:55:48.383203    5068 command_runner.go:130] ! I0507 19:54:46.085427       1 shared_informer.go:320] Caches are synced for disruption
	I0507 19:55:48.383203    5068 command_runner.go:130] ! I0507 19:54:46.086039       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0507 19:55:48.383272    5068 command_runner.go:130] ! I0507 19:54:46.089158       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0507 19:55:48.383272    5068 command_runner.go:130] ! I0507 19:54:46.089172       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0507 19:55:48.383272    5068 command_runner.go:130] ! I0507 19:54:46.089394       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
	I0507 19:55:48.383272    5068 command_runner.go:130] ! I0507 19:54:46.091216       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0507 19:55:48.383339    5068 command_runner.go:130] ! I0507 19:54:46.107002       1 shared_informer.go:320] Caches are synced for deployment
	I0507 19:55:48.383339    5068 command_runner.go:130] ! I0507 19:54:46.116997       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="25.691909ms"
	I0507 19:55:48.383339    5068 command_runner.go:130] ! I0507 19:54:46.118004       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="76.006µs"
	I0507 19:55:48.383339    5068 command_runner.go:130] ! I0507 19:54:46.123476       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.139964ms"
	I0507 19:55:48.383409    5068 command_runner.go:130] ! I0507 19:54:46.124362       1 shared_informer.go:320] Caches are synced for HPA
	I0507 19:55:48.383409    5068 command_runner.go:130] ! I0507 19:54:46.124468       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="121.91µs"
	I0507 19:55:48.383409    5068 command_runner.go:130] ! I0507 19:54:46.181088       1 shared_informer.go:320] Caches are synced for resource quota
	I0507 19:55:48.383409    5068 command_runner.go:130] ! I0507 19:54:46.189327       1 shared_informer.go:320] Caches are synced for resource quota
	I0507 19:55:48.383409    5068 command_runner.go:130] ! I0507 19:54:46.228301       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:48.383474    5068 command_runner.go:130] ! I0507 19:54:46.229031       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:48.383474    5068 command_runner.go:130] ! I0507 19:54:46.229515       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.229843       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000\" does not exist"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.229885       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m02\" does not exist"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.229901       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m03\" does not exist"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.234886       1 shared_informer.go:320] Caches are synced for taint
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.235155       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.237527       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.249515       1 shared_informer.go:320] Caches are synced for node
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.249660       1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.249700       1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.249711       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.249718       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.261687       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.261718       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000-m02"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.261950       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000-m03"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.263203       1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.282864       1 shared_informer.go:320] Caches are synced for GC
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.282948       1 shared_informer.go:320] Caches are synced for TTL
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.291375       1 shared_informer.go:320] Caches are synced for attach detach
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.296389       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.299531       1 shared_informer.go:320] Caches are synced for persistent volume
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.301547       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.315610       1 shared_informer.go:320] Caches are synced for daemon sets
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.707389       1 shared_informer.go:320] Caches are synced for garbage collector
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.707484       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:54:46.714879       1 shared_informer.go:320] Caches are synced for garbage collector
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:55:09.379932       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:55:26.356626       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.170086ms"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:55:26.358052       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="48.002µs"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:55:38.936045       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="86.905µs"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:55:38.982779       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="30.443975ms"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:55:38.983177       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.503µs"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:55:39.007447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.25642ms"
	I0507 19:55:48.383543    5068 command_runner.go:130] ! I0507 19:55:39.007824       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="337.32µs"
	I0507 19:55:50.912870    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods
	I0507 19:55:50.912870    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:50.912870    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:50.912870    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:50.917753    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:50.918787    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:50.918787    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:50.918787    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:51 GMT
	I0507 19:55:50.918787    5068 round_trippers.go:580]     Audit-Id: a33136b8-5782-472d-93d3-6b69b421aa8b
	I0507 19:55:50.918787    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:50.918787    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:50.918787    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:50.920051    5068 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1891"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1873","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86552 chars]
	I0507 19:55:50.924078    5068 system_pods.go:59] 12 kube-system pods found
	I0507 19:55:50.924078    5068 system_pods.go:61] "coredns-7db6d8ff4d-5j966" [d067d438-f4af-42e8-930d-3423a3ac211f] Running
	I0507 19:55:50.924078    5068 system_pods.go:61] "etcd-multinode-600000" [de6e93ee-7fd0-45cd-82eb-44edd4a2c2e3] Running
	I0507 19:55:50.924078    5068 system_pods.go:61] "kindnet-dkxzt" [aa15b7bd-3721-4ba9-91f8-8f4f800a31b0] Running
	I0507 19:55:50.924078    5068 system_pods.go:61] "kindnet-jmlw2" [cfa3d04f-9b15-4394-9404-f3ae09e9a125] Running
	I0507 19:55:50.924078    5068 system_pods.go:61] "kindnet-zw4r9" [b5145a4d-38aa-426e-947f-3480e269470e] Running
	I0507 19:55:50.924078    5068 system_pods.go:61] "kube-apiserver-multinode-600000" [4d9ace3f-e061-42ab-bb1d-3dac545f96a9] Running
	I0507 19:55:50.924078    5068 system_pods.go:61] "kube-controller-manager-multinode-600000" [b960b526-da40-480d-9a72-9ab8c7f2989a] Running
	I0507 19:55:50.924078    5068 system_pods.go:61] "kube-proxy-9fb6t" [f91cc93c-cb87-4494-9e11-b3bf74b9311d] Running
	I0507 19:55:50.924078    5068 system_pods.go:61] "kube-proxy-c9gw5" [9a39807c-6243-4aa2-86f4-8626031c80a6] Running
	I0507 19:55:50.924078    5068 system_pods.go:61] "kube-proxy-pzn8q" [f2506861-1f09-4193-b751-22a685a0b71b] Running
	I0507 19:55:50.924078    5068 system_pods.go:61] "kube-scheduler-multinode-600000" [ec3ac949-cb83-49be-a908-c93e23135ae8] Running
	I0507 19:55:50.924078    5068 system_pods.go:61] "storage-provisioner" [90142b77-53fb-42e1-94f8-7f8a3c7765ac] Running
	I0507 19:55:50.924078    5068 system_pods.go:74] duration metric: took 3.5574782s to wait for pod list to return data ...
	I0507 19:55:50.924078    5068 default_sa.go:34] waiting for default service account to be created ...
	I0507 19:55:50.924563    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/default/serviceaccounts
	I0507 19:55:50.924594    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:50.924594    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:50.924594    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:50.926716    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:55:50.926716    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:50.926716    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:50.926716    5068 round_trippers.go:580]     Content-Length: 262
	I0507 19:55:50.926716    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:51 GMT
	I0507 19:55:50.926716    5068 round_trippers.go:580]     Audit-Id: e24718ba-f7dd-4d46-bc1f-252665277713
	I0507 19:55:50.926716    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:50.926716    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:50.927647    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:50.927728    5068 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1891"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"c895d506-2b91-4017-a081-00ad98764b6c","resourceVersion":"355","creationTimestamp":"2024-05-07T19:33:57Z"}}]}
	I0507 19:55:50.927975    5068 default_sa.go:45] found service account: "default"
	I0507 19:55:50.927975    5068 default_sa.go:55] duration metric: took 3.8972ms for default service account to be created ...
	I0507 19:55:50.927975    5068 system_pods.go:116] waiting for k8s-apps to be running ...
	I0507 19:55:50.928274    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods
	I0507 19:55:50.928274    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:50.928274    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:50.928274    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:50.932592    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:55:50.932592    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:50.933595    5068 round_trippers.go:580]     Audit-Id: 299e691f-7eb6-4e5e-a7f3-d9fe98d34437
	I0507 19:55:50.933595    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:50.933595    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:50.933595    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:50.933595    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:50.933595    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:51 GMT
	I0507 19:55:50.933595    5068 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1891"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1873","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86552 chars]
	I0507 19:55:50.936634    5068 system_pods.go:86] 12 kube-system pods found
	I0507 19:55:50.936634    5068 system_pods.go:89] "coredns-7db6d8ff4d-5j966" [d067d438-f4af-42e8-930d-3423a3ac211f] Running
	I0507 19:55:50.936634    5068 system_pods.go:89] "etcd-multinode-600000" [de6e93ee-7fd0-45cd-82eb-44edd4a2c2e3] Running
	I0507 19:55:50.936634    5068 system_pods.go:89] "kindnet-dkxzt" [aa15b7bd-3721-4ba9-91f8-8f4f800a31b0] Running
	I0507 19:55:50.936634    5068 system_pods.go:89] "kindnet-jmlw2" [cfa3d04f-9b15-4394-9404-f3ae09e9a125] Running
	I0507 19:55:50.936634    5068 system_pods.go:89] "kindnet-zw4r9" [b5145a4d-38aa-426e-947f-3480e269470e] Running
	I0507 19:55:50.936634    5068 system_pods.go:89] "kube-apiserver-multinode-600000" [4d9ace3f-e061-42ab-bb1d-3dac545f96a9] Running
	I0507 19:55:50.936634    5068 system_pods.go:89] "kube-controller-manager-multinode-600000" [b960b526-da40-480d-9a72-9ab8c7f2989a] Running
	I0507 19:55:50.936634    5068 system_pods.go:89] "kube-proxy-9fb6t" [f91cc93c-cb87-4494-9e11-b3bf74b9311d] Running
	I0507 19:55:50.936634    5068 system_pods.go:89] "kube-proxy-c9gw5" [9a39807c-6243-4aa2-86f4-8626031c80a6] Running
	I0507 19:55:50.936634    5068 system_pods.go:89] "kube-proxy-pzn8q" [f2506861-1f09-4193-b751-22a685a0b71b] Running
	I0507 19:55:50.936634    5068 system_pods.go:89] "kube-scheduler-multinode-600000" [ec3ac949-cb83-49be-a908-c93e23135ae8] Running
	I0507 19:55:50.936634    5068 system_pods.go:89] "storage-provisioner" [90142b77-53fb-42e1-94f8-7f8a3c7765ac] Running
	I0507 19:55:50.936634    5068 system_pods.go:126] duration metric: took 8.6583ms to wait for k8s-apps to be running ...
	I0507 19:55:50.936634    5068 system_svc.go:44] waiting for kubelet service to be running ....
	I0507 19:55:50.944475    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 19:55:50.966109    5068 system_svc.go:56] duration metric: took 29.4725ms WaitForService to wait for kubelet
	I0507 19:55:50.966109    5068 kubeadm.go:576] duration metric: took 1m12.899416s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 19:55:50.966109    5068 node_conditions.go:102] verifying NodePressure condition ...
	I0507 19:55:50.966109    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes
	I0507 19:55:50.966109    5068 round_trippers.go:469] Request Headers:
	I0507 19:55:50.966109    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:55:50.966109    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:55:50.969339    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:55:50.969339    5068 round_trippers.go:577] Response Headers:
	I0507 19:55:50.969339    5068 round_trippers.go:580]     Audit-Id: 2eecbef5-bb1e-4c9c-9472-9888da2747ee
	I0507 19:55:50.969339    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:55:50.969339    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:55:50.969339    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:55:50.969339    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:55:50.969339    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:55:51 GMT
	I0507 19:55:50.970291    5068 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1891"},"items":[{"metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 16257 chars]
	I0507 19:55:50.971433    5068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 19:55:50.971433    5068 node_conditions.go:123] node cpu capacity is 2
	I0507 19:55:50.971504    5068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 19:55:50.971504    5068 node_conditions.go:123] node cpu capacity is 2
	I0507 19:55:50.971504    5068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 19:55:50.971504    5068 node_conditions.go:123] node cpu capacity is 2
	I0507 19:55:50.971504    5068 node_conditions.go:105] duration metric: took 5.3952ms to run NodePressure ...
	I0507 19:55:50.971580    5068 start.go:240] waiting for startup goroutines ...
	I0507 19:55:50.971580    5068 start.go:245] waiting for cluster config update ...
	I0507 19:55:50.971580    5068 start.go:254] writing updated cluster config ...
	I0507 19:55:50.977261    5068 out.go:177] 
	I0507 19:55:50.980214    5068 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:55:50.986680    5068 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:55:50.986680    5068 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\config.json ...
	I0507 19:55:50.992426    5068 out.go:177] * Starting "multinode-600000-m02" worker node in "multinode-600000" cluster
	I0507 19:55:50.994430    5068 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 19:55:50.994430    5068 cache.go:56] Caching tarball of preloaded images
	I0507 19:55:50.994430    5068 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0507 19:55:50.995476    5068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 19:55:50.995476    5068 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\config.json ...
	I0507 19:55:50.996438    5068 start.go:360] acquireMachinesLock for multinode-600000-m02: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 19:55:50.996438    5068 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-600000-m02"
	I0507 19:55:50.997434    5068 start.go:96] Skipping create...Using existing machine configuration
	I0507 19:55:50.997434    5068 fix.go:54] fixHost starting: m02
	I0507 19:55:50.997434    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:55:52.937071    5068 main.go:141] libmachine: [stdout =====>] : Off
	
	I0507 19:55:52.937071    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:55:52.937071    5068 fix.go:112] recreateIfNeeded on multinode-600000-m02: state=Stopped err=<nil>
	W0507 19:55:52.937071    5068 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 19:55:52.941346    5068 out.go:177] * Restarting existing hyperv VM for "multinode-600000-m02" ...
	I0507 19:55:52.945367    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-600000-m02
	I0507 19:55:55.712784    5068 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:55:55.712863    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:55:55.712863    5068 main.go:141] libmachine: Waiting for host to start...
	I0507 19:55:55.712863    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:55:57.730903    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:55:57.730903    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:55:57.731084    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:55:59.941235    5068 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:55:59.941235    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:00.953090    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:56:02.968032    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:56:02.968799    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:02.968799    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:56:05.252728    5068 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:56:05.253423    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:06.265721    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:56:08.230836    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:56:08.231033    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:08.231109    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:56:10.490032    5068 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:56:10.490600    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:11.499481    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:56:13.462705    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:56:13.463136    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:13.463194    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:56:15.690299    5068 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:56:15.690370    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:16.693663    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:56:18.687179    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:56:18.687852    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:18.687852    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:56:21.046978    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:56:21.046978    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:21.049022    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:56:22.971699    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:56:22.971771    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:22.971992    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:56:25.322393    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:56:25.322393    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:25.323130    5068 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\config.json ...
	I0507 19:56:25.325029    5068 machine.go:94] provisionDockerMachine start ...
	I0507 19:56:25.325029    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:56:27.270956    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:56:27.271005    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:27.271005    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:56:29.518626    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:56:29.518678    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:29.522138    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:56:29.522138    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.128.95 22 <nil> <nil>}
	I0507 19:56:29.522138    5068 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 19:56:29.642024    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0507 19:56:29.642024    5068 buildroot.go:166] provisioning hostname "multinode-600000-m02"
	I0507 19:56:29.642024    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:56:31.540551    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:56:31.540551    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:31.540630    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:56:33.791056    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:56:33.791519    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:33.797662    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:56:33.798222    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.128.95 22 <nil> <nil>}
	I0507 19:56:33.798356    5068 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-600000-m02 && echo "multinode-600000-m02" | sudo tee /etc/hostname
	I0507 19:56:33.940769    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-600000-m02
	
	I0507 19:56:33.940769    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:56:35.871629    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:56:35.872138    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:35.872138    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:56:38.161315    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:56:38.161315    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:38.165457    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:56:38.165808    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.128.95 22 <nil> <nil>}
	I0507 19:56:38.165808    5068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-600000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-600000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-600000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 19:56:38.302490    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 19:56:38.302490    5068 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0507 19:56:38.302490    5068 buildroot.go:174] setting up certificates
	I0507 19:56:38.302490    5068 provision.go:84] configureAuth start
	I0507 19:56:38.302490    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:56:40.220991    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:56:40.221350    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:40.221486    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:56:42.487136    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:56:42.487136    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:42.487136    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:56:44.395700    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:56:44.395751    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:44.395751    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:56:46.712120    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:56:46.712120    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:46.712120    5068 provision.go:143] copyHostCerts
	I0507 19:56:46.712271    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0507 19:56:46.712482    5068 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0507 19:56:46.712482    5068 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0507 19:56:46.712482    5068 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0507 19:56:46.713615    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0507 19:56:46.713724    5068 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0507 19:56:46.713724    5068 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0507 19:56:46.713724    5068 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0507 19:56:46.714409    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0507 19:56:46.714409    5068 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0507 19:56:46.714939    5068 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0507 19:56:46.715100    5068 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0507 19:56:46.715999    5068 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-600000-m02 san=[127.0.0.1 172.19.128.95 localhost minikube multinode-600000-m02]
	I0507 19:56:46.941544    5068 provision.go:177] copyRemoteCerts
	I0507 19:56:46.948542    5068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 19:56:46.949598    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:56:48.897143    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:56:48.897512    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:48.897512    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:56:51.186940    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:56:51.186940    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:51.187330    5068 sshutil.go:53] new ssh client: &{IP:172.19.128.95 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\id_rsa Username:docker}
	I0507 19:56:51.281894    5068 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.3320092s)
	I0507 19:56:51.282010    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0507 19:56:51.282010    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0507 19:56:51.328000    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0507 19:56:51.328000    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0507 19:56:51.371684    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0507 19:56:51.371957    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0507 19:56:51.414123    5068 provision.go:87] duration metric: took 13.1107039s to configureAuth
	I0507 19:56:51.414348    5068 buildroot.go:189] setting minikube options for container-runtime
	I0507 19:56:51.414848    5068 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:56:51.414924    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:56:53.309342    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:56:53.309472    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:53.309472    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:56:55.566251    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:56:55.566251    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:55.570649    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:56:55.570949    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.128.95 22 <nil> <nil>}
	I0507 19:56:55.570949    5068 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 19:56:55.697206    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 19:56:55.697206    5068 buildroot.go:70] root file system type: tmpfs
	I0507 19:56:55.697206    5068 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 19:56:55.697206    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:56:57.539283    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:56:57.540096    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:57.540096    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:56:59.802405    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:56:59.802405    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:56:59.806696    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:56:59.807074    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.128.95 22 <nil> <nil>}
	I0507 19:56:59.807074    5068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.135.22"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 19:56:59.952615    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.135.22
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 19:56:59.952615    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:57:01.875415    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:57:01.875415    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:01.875415    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:57:04.092689    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:57:04.093521    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:04.097609    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:57:04.097609    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.128.95 22 <nil> <nil>}
	I0507 19:57:04.098131    5068 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 19:57:06.320076    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
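The `sudo diff -u … || { sudo mv …; systemctl daemon-reload && … restart docker; }` command above is a common idempotent-update idiom: the freshly generated unit file only replaces the installed one (and only triggers a reload/restart) when the two actually differ. Here the `diff` failed because `/lib/systemd/system/docker.service` did not exist yet, which also takes the `||` branch. A minimal sketch of the same pattern on throwaway files, with a marker file standing in for the restart so it runs anywhere (paths and the "restart" side effect are stand-ins, not minikube's real ones):

```shell
#!/bin/sh
# Sketch of minikube's "replace only if changed" idiom:
#   diff -u CURRENT NEW || { mv NEW CURRENT; <reload/restart>; }
# A temp dir replaces /lib/systemd/system, and touching a marker
# file replaces `systemctl restart`, so nothing real is modified.
set -eu
dir=$(mktemp -d)
printf 'ExecStart=old\n' > "$dir/docker.service"
printf 'ExecStart=new\n' > "$dir/docker.service.new"

# First run: the files differ, so diff exits nonzero, the new file
# is moved into place, and the "restart" side effect fires.
diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null 2>&1 || {
  mv "$dir/docker.service.new" "$dir/docker.service"
  touch "$dir/restarted"
}
[ -f "$dir/restarted" ] && echo "restarted once"

# Second run: regenerate an identical file; diff exits zero,
# so the || branch (and the restart) is skipped entirely.
printf 'ExecStart=new\n' > "$dir/docker.service.new"
rm -f "$dir/restarted"
diff -u "$dir/docker.service" "$dir/docker.service.new" >/dev/null 2>&1 || {
  mv "$dir/docker.service.new" "$dir/docker.service"
  touch "$dir/restarted"
}
[ -f "$dir/restarted" ] || echo "no restart needed"
```

The payoff is the same as in the log: repeated provisioning runs converge without restarting Docker when the rendered unit is unchanged.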
	
	I0507 19:57:06.320076    5068 machine.go:97] duration metric: took 40.9923395s to provisionDockerMachine
	I0507 19:57:06.320076    5068 start.go:293] postStartSetup for "multinode-600000-m02" (driver="hyperv")
	I0507 19:57:06.320076    5068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 19:57:06.329476    5068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 19:57:06.329552    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:57:08.216026    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:57:08.216500    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:08.216500    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:57:10.480333    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:57:10.481225    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:10.481580    5068 sshutil.go:53] new ssh client: &{IP:172.19.128.95 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\id_rsa Username:docker}
	I0507 19:57:10.582484    5068 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.252725s)
	I0507 19:57:10.592059    5068 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 19:57:10.598077    5068 command_runner.go:130] > NAME=Buildroot
	I0507 19:57:10.598281    5068 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0507 19:57:10.598281    5068 command_runner.go:130] > ID=buildroot
	I0507 19:57:10.598281    5068 command_runner.go:130] > VERSION_ID=2023.02.9
	I0507 19:57:10.598281    5068 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0507 19:57:10.598610    5068 info.go:137] Remote host: Buildroot 2023.02.9
	I0507 19:57:10.598673    5068 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0507 19:57:10.598673    5068 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0507 19:57:10.599524    5068 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> 99922.pem in /etc/ssl/certs
	I0507 19:57:10.599600    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /etc/ssl/certs/99922.pem
	I0507 19:57:10.607561    5068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0507 19:57:10.625159    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /etc/ssl/certs/99922.pem (1708 bytes)
	I0507 19:57:10.668840    5068 start.go:296] duration metric: took 4.3484743s for postStartSetup
	I0507 19:57:10.668921    5068 fix.go:56] duration metric: took 1m19.6662571s for fixHost
	I0507 19:57:10.668982    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:57:12.538447    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:57:12.538447    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:12.538653    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:57:14.767031    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:57:14.767609    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:14.771232    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:57:14.771856    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.128.95 22 <nil> <nil>}
	I0507 19:57:14.771856    5068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0507 19:57:14.895877    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715111835.132144867
	
	I0507 19:57:14.896044    5068 fix.go:216] guest clock: 1715111835.132144867
	I0507 19:57:14.896044    5068 fix.go:229] Guest: 2024-05-07 19:57:15.132144867 +0000 UTC Remote: 2024-05-07 19:57:10.6689218 +0000 UTC m=+273.511821801 (delta=4.463223067s)
	I0507 19:57:14.896213    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:57:16.815455    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:57:16.815455    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:16.815455    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:57:19.118576    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:57:19.118609    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:19.122245    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:57:19.122513    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.128.95 22 <nil> <nil>}
	I0507 19:57:19.122513    5068 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715111834
	I0507 19:57:19.257154    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May  7 19:57:14 UTC 2024
	
	I0507 19:57:19.257154    5068 fix.go:236] clock set: Tue May  7 19:57:14 UTC 2024
	 (err=<nil>)
	I0507 19:57:19.257154    5068 start.go:83] releasing machines lock for "multinode-600000-m02", held for 1m28.2539167s
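The clock-fix sequence above reads the guest clock over SSH (`date +%s.%N`, logged with a `%!s(MISSING)` printf artifact), compares it with the host-side timestamp, and rewinds the guest with `sudo date -s @<epoch>` — here the measured drift was about 4.46s. A sketch of just the comparison arithmetic, with hard-coded epochs truncated to whole seconds from the log, a 1-second tolerance chosen purely for illustration, and the real `date -s` left as an echo so the snippet is safe to run:

```shell
#!/bin/sh
# Mirror minikube's fix.go guest-clock check: compute the absolute
# drift between guest and host epochs and decide whether to resync.
# Both timestamps and the 1s threshold are illustrative assumptions.
set -eu
guest=1715111835   # guest clock, truncated from `date +%s.%N` output
host=1715111831    # host clock at the same moment (drift ~4s)

delta=$((guest - host))
[ "$delta" -lt 0 ] && delta=$((-delta))

if [ "$delta" -gt 1 ]; then
  # The real flow runs: ssh ... "sudo date -s @$host"
  echo "would resync guest clock to @$host (drift ${delta}s)"
else
  echo "clock within tolerance"
fi
```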
	I0507 19:57:19.257363    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:57:21.175274    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:57:21.175274    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:21.175399    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:57:23.445057    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:57:23.445956    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:23.448941    5068 out.go:177] * Found network options:
	I0507 19:57:23.451841    5068 out.go:177]   - NO_PROXY=172.19.135.22
	W0507 19:57:23.456606    5068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0507 19:57:23.459108    5068 out.go:177]   - NO_PROXY=172.19.135.22
	W0507 19:57:23.461435    5068 proxy.go:119] fail to check proxy env: Error ip not in block
	W0507 19:57:23.462759    5068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0507 19:57:23.465080    5068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 19:57:23.465181    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:57:23.476240    5068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0507 19:57:23.476240    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:57:25.417957    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:57:25.417957    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:25.418859    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:57:25.431771    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:57:25.431771    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:25.431771    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:57:27.777625    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:57:27.777972    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:27.778117    5068 sshutil.go:53] new ssh client: &{IP:172.19.128.95 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\id_rsa Username:docker}
	I0507 19:57:27.796171    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:57:27.796605    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:27.796641    5068 sshutil.go:53] new ssh client: &{IP:172.19.128.95 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\id_rsa Username:docker}
	I0507 19:57:27.919984    5068 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0507 19:57:27.919984    5068 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.4546052s)
	I0507 19:57:27.919984    5068 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0507 19:57:27.919984    5068 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.4434464s)
	W0507 19:57:27.919984    5068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 19:57:27.930440    5068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0507 19:57:27.957337    5068 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0507 19:57:27.957337    5068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0507 19:57:27.957337    5068 start.go:494] detecting cgroup driver to use...
	I0507 19:57:27.957952    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 19:57:27.989629    5068 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0507 19:57:27.998206    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0507 19:57:28.025055    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 19:57:28.042333    5068 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 19:57:28.051920    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 19:57:28.077561    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 19:57:28.105307    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 19:57:28.131489    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 19:57:28.164219    5068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 19:57:28.199815    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 19:57:28.227728    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 19:57:28.254370    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
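The run of `sed -i` commands above rewrites `/etc/containerd/config.toml` in place: pinning the pause image, forcing `SystemdCgroup = false` (the "cgroupfs" driver the log mentions), migrating runtime names to `io.containerd.runc.v2`, and so on. A sketch of one of those edits against a throwaway file — the config contents here are a minimal stand-in, not the full Buildroot config, and the `-i`/`-r` flags assume GNU sed as on the guest (BSD/macOS sed differs):

```shell
#!/bin/sh
# Reproduce the SystemdCgroup rewrite from the log:
#   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
# The \1 backreference preserves the original indentation.
set -eu
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
EOF

sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
grep 'SystemdCgroup' "$cfg"
```

Anchoring on `^( *)SystemdCgroup = ` rather than the bare key keeps the edit from touching commented-out or unrelated lines, which is why the same pattern is safe to re-run on every provision.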
	I0507 19:57:28.281366    5068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 19:57:28.298803    5068 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0507 19:57:28.307574    5068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 19:57:28.334583    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:57:28.512862    5068 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0507 19:57:28.543745    5068 start.go:494] detecting cgroup driver to use...
	I0507 19:57:28.552207    5068 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 19:57:28.573040    5068 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0507 19:57:28.573336    5068 command_runner.go:130] > [Unit]
	I0507 19:57:28.573336    5068 command_runner.go:130] > Description=Docker Application Container Engine
	I0507 19:57:28.573336    5068 command_runner.go:130] > Documentation=https://docs.docker.com
	I0507 19:57:28.573336    5068 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0507 19:57:28.573379    5068 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0507 19:57:28.573379    5068 command_runner.go:130] > StartLimitBurst=3
	I0507 19:57:28.573379    5068 command_runner.go:130] > StartLimitIntervalSec=60
	I0507 19:57:28.573379    5068 command_runner.go:130] > [Service]
	I0507 19:57:28.573379    5068 command_runner.go:130] > Type=notify
	I0507 19:57:28.573379    5068 command_runner.go:130] > Restart=on-failure
	I0507 19:57:28.573379    5068 command_runner.go:130] > Environment=NO_PROXY=172.19.135.22
	I0507 19:57:28.573522    5068 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0507 19:57:28.573522    5068 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0507 19:57:28.573522    5068 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0507 19:57:28.573522    5068 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0507 19:57:28.573522    5068 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0507 19:57:28.573522    5068 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0507 19:57:28.573522    5068 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0507 19:57:28.573522    5068 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0507 19:57:28.573522    5068 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0507 19:57:28.573522    5068 command_runner.go:130] > ExecStart=
	I0507 19:57:28.573522    5068 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0507 19:57:28.573522    5068 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0507 19:57:28.573522    5068 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0507 19:57:28.573522    5068 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0507 19:57:28.573522    5068 command_runner.go:130] > LimitNOFILE=infinity
	I0507 19:57:28.573522    5068 command_runner.go:130] > LimitNPROC=infinity
	I0507 19:57:28.573522    5068 command_runner.go:130] > LimitCORE=infinity
	I0507 19:57:28.573522    5068 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0507 19:57:28.573522    5068 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0507 19:57:28.573522    5068 command_runner.go:130] > TasksMax=infinity
	I0507 19:57:28.573522    5068 command_runner.go:130] > TimeoutStartSec=0
	I0507 19:57:28.573522    5068 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0507 19:57:28.573522    5068 command_runner.go:130] > Delegate=yes
	I0507 19:57:28.573522    5068 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0507 19:57:28.573522    5068 command_runner.go:130] > KillMode=process
	I0507 19:57:28.573522    5068 command_runner.go:130] > [Install]
	I0507 19:57:28.573522    5068 command_runner.go:130] > WantedBy=multi-user.target
	I0507 19:57:28.581894    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 19:57:28.610410    5068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 19:57:28.642979    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 19:57:28.675846    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 19:57:28.705877    5068 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0507 19:57:28.781044    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 19:57:28.803136    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 19:57:28.833979    5068 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
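The `printf … | sudo tee /etc/crictl.yaml` step above points crictl at the cri-dockerd socket; the `%!s(MISSING)` in the logged command is most likely a Go printf verb whose argument was consumed before logging, not text that lands in the file — the echoed `runtime-endpoint:` line shows what was actually written. The same write against a temp file instead of `/etc`:

```shell
#!/bin/sh
# Write a crictl config the way the log does, into a temp file
# (the real command tees into /etc/crictl.yaml with sudo).
set -eu
f=$(mktemp)
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | tee "$f"
```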
	I0507 19:57:28.842832    5068 ssh_runner.go:195] Run: which cri-dockerd
	I0507 19:57:28.848207    5068 command_runner.go:130] > /usr/bin/cri-dockerd
	I0507 19:57:28.856131    5068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 19:57:28.873661    5068 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 19:57:28.912027    5068 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 19:57:29.089243    5068 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 19:57:29.269042    5068 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 19:57:29.269042    5068 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
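The 130-byte `/etc/docker/daemon.json` written above is what switches Docker to the "cgroupfs" cgroup driver mentioned on the previous line. The log does not show the file's contents; a plausible minimal form (an assumption, not the literal bytes minikube wrote) is:

```json
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"]
}
```

After the `systemctl restart docker` that follows, `docker info --format '{{.CgroupDriver}}'` should report `cgroupfs`.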
	I0507 19:57:29.309728    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:57:29.485858    5068 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 19:57:32.045383    5068 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5593539s)
	I0507 19:57:32.052983    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 19:57:32.082651    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 19:57:32.113182    5068 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 19:57:32.292714    5068 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 19:57:32.465268    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:57:32.643153    5068 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 19:57:32.678993    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 19:57:32.709115    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:57:32.907572    5068 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 19:57:33.014085    5068 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 19:57:33.022649    5068 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 19:57:33.035385    5068 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0507 19:57:33.035385    5068 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0507 19:57:33.035385    5068 command_runner.go:130] > Device: 0,22	Inode: 849         Links: 1
	I0507 19:57:33.035385    5068 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0507 19:57:33.035385    5068 command_runner.go:130] > Access: 2024-05-07 19:57:33.174481224 +0000
	I0507 19:57:33.035385    5068 command_runner.go:130] > Modify: 2024-05-07 19:57:33.174481224 +0000
	I0507 19:57:33.035872    5068 command_runner.go:130] > Change: 2024-05-07 19:57:33.177481799 +0000
	I0507 19:57:33.035872    5068 command_runner.go:130] >  Birth: -
	I0507 19:57:33.035977    5068 start.go:562] Will wait 60s for crictl version
	I0507 19:57:33.045304    5068 ssh_runner.go:195] Run: which crictl
	I0507 19:57:33.051433    5068 command_runner.go:130] > /usr/bin/crictl
	I0507 19:57:33.059056    5068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 19:57:33.106050    5068 command_runner.go:130] > Version:  0.1.0
	I0507 19:57:33.106050    5068 command_runner.go:130] > RuntimeName:  docker
	I0507 19:57:33.106114    5068 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0507 19:57:33.106114    5068 command_runner.go:130] > RuntimeApiVersion:  v1
	I0507 19:57:33.107917    5068 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0507 19:57:33.114009    5068 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 19:57:33.140612    5068 command_runner.go:130] > 26.0.2
	I0507 19:57:33.146577    5068 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 19:57:33.172865    5068 command_runner.go:130] > 26.0.2
	I0507 19:57:33.175994    5068 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0507 19:57:33.179279    5068 out.go:177]   - env NO_PROXY=172.19.135.22
	I0507 19:57:33.181407    5068 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0507 19:57:33.185021    5068 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0507 19:57:33.185021    5068 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0507 19:57:33.185021    5068 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0507 19:57:33.185021    5068 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a3:a5:4f Flags:up|broadcast|multicast|running}
	I0507 19:57:33.187961    5068 ip.go:210] interface addr: fe80::1edb:f5fd:c218:d8d2/64
	I0507 19:57:33.187994    5068 ip.go:210] interface addr: 172.19.128.1/20
	I0507 19:57:33.194911    5068 ssh_runner.go:195] Run: grep 172.19.128.1	host.minikube.internal$ /etc/hosts
	I0507 19:57:33.200606    5068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
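The `/etc/hosts` rewrite above uses a filter-then-append pattern: strip any existing `host.minikube.internal` line, append the fresh one, then copy the result back, so repeated runs always leave exactly one entry. A standalone sketch of the same idempotent pattern against a scratch file (the path and seed contents are stand-ins, not the VM's real `/etc/hosts`):

```shell
#!/bin/bash
# Idempotent host-entry update, mirroring minikube's grep -v / append / cp trick.
# Works on a scratch copy so the sketch is safe to run anywhere.
HOSTS=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.128.1\thost.minikube.internal\n' > "$HOSTS"

update_entry() {
  local ip=$1 name=$2
  # Drop any line ending in "<tab><name>", then append the new mapping.
  { grep -v $'\t'"${name}"'$' "$HOSTS"; printf '%s\t%s\n' "$ip" "$name"; } > "$HOSTS.new"
  mv "$HOSTS.new" "$HOSTS"
}

update_entry 172.19.128.1 host.minikube.internal
update_entry 172.19.128.1 host.minikube.internal   # second run is a no-op
count=$(grep -c 'host.minikube.internal' "$HOSTS") # exactly one entry survives
echo "$count"
```

Running the update twice, as minikube may on restart, still yields a single entry.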
	I0507 19:57:33.223500    5068 mustload.go:65] Loading cluster: multinode-600000
	I0507 19:57:33.224088    5068 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:57:33.224671    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:57:35.068857    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:57:35.068857    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:35.068857    5068 host.go:66] Checking if "multinode-600000" exists ...
	I0507 19:57:35.069407    5068 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000 for IP: 172.19.128.95
	I0507 19:57:35.069407    5068 certs.go:194] generating shared ca certs ...
	I0507 19:57:35.069407    5068 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:57:35.070223    5068 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0507 19:57:35.070426    5068 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0507 19:57:35.070676    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0507 19:57:35.070813    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0507 19:57:35.070919    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0507 19:57:35.071075    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0507 19:57:35.071273    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem (1338 bytes)
	W0507 19:57:35.071524    5068 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992_empty.pem, impossibly tiny 0 bytes
	I0507 19:57:35.071641    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0507 19:57:35.071843    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0507 19:57:35.072064    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0507 19:57:35.072208    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0507 19:57:35.072597    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem (1708 bytes)
	I0507 19:57:35.072743    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem -> /usr/share/ca-certificates/9992.pem
	I0507 19:57:35.072811    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /usr/share/ca-certificates/99922.pem
	I0507 19:57:35.072948    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:57:35.073083    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 19:57:35.121779    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 19:57:35.164624    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 19:57:35.207213    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0507 19:57:35.250386    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem --> /usr/share/ca-certificates/9992.pem (1338 bytes)
	I0507 19:57:35.291114    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /usr/share/ca-certificates/99922.pem (1708 bytes)
	I0507 19:57:35.337526    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 19:57:35.388904    5068 ssh_runner.go:195] Run: openssl version
	I0507 19:57:35.397988    5068 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0507 19:57:35.405802    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99922.pem && ln -fs /usr/share/ca-certificates/99922.pem /etc/ssl/certs/99922.pem"
	I0507 19:57:35.440153    5068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99922.pem
	I0507 19:57:35.449293    5068 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 19:57:35.449378    5068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 19:57:35.462101    5068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99922.pem
	I0507 19:57:35.472462    5068 command_runner.go:130] > 3ec20f2e
	I0507 19:57:35.481949    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99922.pem /etc/ssl/certs/3ec20f2e.0"
	I0507 19:57:35.508305    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 19:57:35.532710    5068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:57:35.539626    5068 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:57:35.539626    5068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:57:35.551526    5068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:57:35.559757    5068 command_runner.go:130] > b5213941
	I0507 19:57:35.567325    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0507 19:57:35.595280    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9992.pem && ln -fs /usr/share/ca-certificates/9992.pem /etc/ssl/certs/9992.pem"
	I0507 19:57:35.623871    5068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9992.pem
	I0507 19:57:35.630909    5068 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 19:57:35.630980    5068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 19:57:35.638242    5068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9992.pem
	I0507 19:57:35.646432    5068 command_runner.go:130] > 51391683
	I0507 19:57:35.654163    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9992.pem /etc/ssl/certs/51391683.0"
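The repeated `openssl x509 -hash -noout` / `ln -fs` pairs above implement OpenSSL's hashed-directory lookup: every CA in `/etc/ssl/certs` must be reachable through a `<subject-hash>.0` symlink (the `b5213941.0`-style names in the log). The same two steps against a throwaway self-signed CA (the CN, file names, and temp directory here are illustrative, not from the log):

```shell
#!/bin/bash
# Reproduce minikube's CA-install step: hash the subject, link <hash>.0 to the cert.
set -e
dir=$(mktemp -d)
cd "$dir"
# Throwaway self-signed cert standing in for minikubeCA.pem (illustrative only).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=sketchCA" -keyout ca.key -out sketchCA.pem 2>/dev/null
hash=$(openssl x509 -hash -noout -in sketchCA.pem)  # subject hash, e.g. 8 hex chars
ln -fs sketchCA.pem "${hash}.0"                     # name OpenSSL lookup expects
readlink "${hash}.0"
```

With the link in place, `openssl verify -CApath "$dir" <cert signed by sketchCA>` can find the CA by hash alone, which is why minikube never has to tell clients the CA's filename.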
	I0507 19:57:35.683670    5068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 19:57:35.689560    5068 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0507 19:57:35.689784    5068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0507 19:57:35.689784    5068 kubeadm.go:928] updating node {m02 172.19.128.95 8443 v1.30.0 docker false true} ...
	I0507 19:57:35.689784    5068 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-600000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.128.95
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0507 19:57:35.697633    5068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0507 19:57:35.715181    5068 command_runner.go:130] > kubeadm
	I0507 19:57:35.715181    5068 command_runner.go:130] > kubectl
	I0507 19:57:35.715181    5068 command_runner.go:130] > kubelet
	I0507 19:57:35.715181    5068 binaries.go:44] Found k8s binaries, skipping transfer
	I0507 19:57:35.723240    5068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0507 19:57:35.737252    5068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (320 bytes)
	I0507 19:57:35.766437    5068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 19:57:35.804650    5068 ssh_runner.go:195] Run: grep 172.19.135.22	control-plane.minikube.internal$ /etc/hosts
	I0507 19:57:35.810768    5068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.135.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 19:57:35.843111    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:57:36.029398    5068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 19:57:36.057161    5068 host.go:66] Checking if "multinode-600000" exists ...
	I0507 19:57:36.058283    5068 start.go:316] joinCluster: &{Name:multinode-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.135.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.128.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.129.4 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:f
alse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disa
bleOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 19:57:36.058339    5068 start.go:329] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:172.19.128.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0507 19:57:36.058339    5068 host.go:66] Checking if "multinode-600000-m02" exists ...
	I0507 19:57:36.059305    5068 mustload.go:65] Loading cluster: multinode-600000
	I0507 19:57:36.059910    5068 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:57:36.060515    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:57:37.970722    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:57:37.970775    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:37.970775    5068 host.go:66] Checking if "multinode-600000" exists ...
	I0507 19:57:37.971299    5068 api_server.go:166] Checking apiserver status ...
	I0507 19:57:37.979080    5068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 19:57:37.979638    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:57:39.910729    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:57:39.911257    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:39.911321    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:57:42.161213    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:57:42.161213    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:42.161566    5068 sshutil.go:53] new ssh client: &{IP:172.19.135.22 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\id_rsa Username:docker}
	I0507 19:57:42.268833    5068 command_runner.go:130] > 1882
	I0507 19:57:42.269017    5068 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.2895301s)
	I0507 19:57:42.280629    5068 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1882/cgroup
	W0507 19:57:42.298497    5068 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1882/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0507 19:57:42.307923    5068 ssh_runner.go:195] Run: ls
	I0507 19:57:42.313946    5068 api_server.go:253] Checking apiserver healthz at https://172.19.135.22:8443/healthz ...
	I0507 19:57:42.320339    5068 api_server.go:279] https://172.19.135.22:8443/healthz returned 200:
	ok
	I0507 19:57:42.328204    5068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-600000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0507 19:57:42.469457    5068 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-jmlw2, kube-system/kube-proxy-9fb6t
	I0507 19:57:45.490733    5068 command_runner.go:130] > node/multinode-600000-m02 cordoned
	I0507 19:57:45.490733    5068 command_runner.go:130] > pod "busybox-fc5497c4f-cpw2r" has DeletionTimestamp older than 1 seconds, skipping
	I0507 19:57:45.490733    5068 command_runner.go:130] > node/multinode-600000-m02 drained
	I0507 19:57:45.490733    5068 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-600000-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data: (3.1623163s)
	I0507 19:57:45.490733    5068 node.go:128] successfully drained node "multinode-600000-m02"
	I0507 19:57:45.490733    5068 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0507 19:57:45.490733    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:57:47.381083    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:57:47.381186    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:47.381330    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:57:49.648547    5068 main.go:141] libmachine: [stdout =====>] : 172.19.128.95
	
	I0507 19:57:49.648688    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:49.649077    5068 sshutil.go:53] new ssh client: &{IP:172.19.128.95 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\id_rsa Username:docker}
	I0507 19:57:50.031995    5068 command_runner.go:130] ! W0507 19:57:50.277763    1524 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0507 19:57:50.526376    5068 command_runner.go:130] ! W0507 19:57:50.771134    1524 cleanupnode.go:106] [reset] Failed to remove containers: failed to stop running pod 4298851cae09932972cf0557c11b037116961ba8030ca0a91a3839898122206a: output: E0507 19:57:50.512972    1561 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-cpw2r_default\" network: cni config uninitialized" podSandboxID="4298851cae09932972cf0557c11b037116961ba8030ca0a91a3839898122206a"
	I0507 19:57:50.526489    5068 command_runner.go:130] ! time="2024-05-07T19:57:50Z" level=fatal msg="stopping the pod sandbox \"4298851cae09932972cf0557c11b037116961ba8030ca0a91a3839898122206a\": rpc error: code = Unknown desc = networkPlugin cni failed to teardown pod \"busybox-fc5497c4f-cpw2r_default\" network: cni config uninitialized"
	I0507 19:57:50.526489    5068 command_runner.go:130] ! : exit status 1
	I0507 19:57:50.545993    5068 command_runner.go:130] > [preflight] Running pre-flight checks
	I0507 19:57:50.547055    5068 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0507 19:57:50.547055    5068 command_runner.go:130] > [reset] Stopping the kubelet service
	I0507 19:57:50.547055    5068 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0507 19:57:50.547195    5068 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0507 19:57:50.547267    5068 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0507 19:57:50.547320    5068 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0507 19:57:50.547351    5068 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0507 19:57:50.547445    5068 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0507 19:57:50.547517    5068 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0507 19:57:50.547530    5068 command_runner.go:130] > to reset your system's IPVS tables.
	I0507 19:57:50.547530    5068 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0507 19:57:50.547629    5068 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0507 19:57:50.547658    5068 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (5.0565841s)
	I0507 19:57:50.547658    5068 node.go:155] successfully reset node "multinode-600000-m02"
	I0507 19:57:50.549004    5068 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 19:57:50.550181    5068 kapi.go:59] client config for multinode-600000: &rest.Config{Host:"https://172.19.135.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 19:57:50.551127    5068 cert_rotation.go:137] Starting client certificate rotation controller
	I0507 19:57:50.551500    5068 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0507 19:57:50.551500    5068 round_trippers.go:463] DELETE https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:57:50.551581    5068 round_trippers.go:469] Request Headers:
	I0507 19:57:50.551581    5068 round_trippers.go:473]     Content-Type: application/json
	I0507 19:57:50.551581    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:57:50.551581    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:57:50.568076    5068 round_trippers.go:574] Response Status: 200 OK in 16 milliseconds
	I0507 19:57:50.568076    5068 round_trippers.go:577] Response Headers:
	I0507 19:57:50.568076    5068 round_trippers.go:580]     Content-Length: 171
	I0507 19:57:50.568076    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:57:50 GMT
	I0507 19:57:50.568076    5068 round_trippers.go:580]     Audit-Id: 7f970cde-f87a-4b7e-81f8-02bb507652c2
	I0507 19:57:50.568076    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:57:50.568076    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:57:50.568076    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:57:50.568076    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:57:50.568533    5068 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-600000-m02","kind":"nodes","uid":"4aaf533a-c21c-427b-b48f-82fef83a8fb3"}}
	I0507 19:57:50.568533    5068 node.go:180] successfully deleted node "multinode-600000-m02"
	I0507 19:57:50.568605    5068 start.go:333] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:172.19.128.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0507 19:57:50.568639    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0507 19:57:50.568721    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:57:52.467461    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:57:52.467461    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:52.467972    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:57:54.746545    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:57:54.746545    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:57:54.746625    5068 sshutil.go:53] new ssh client: &{IP:172.19.135.22 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\id_rsa Username:docker}
	I0507 19:57:54.953070    5068 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token uswf23.agx8fhn2ko467co0 --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 
	I0507 19:57:54.953070    5068 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.384135s)
	I0507 19:57:54.953070    5068 start.go:342] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.19.128.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0507 19:57:54.953070    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uswf23.agx8fhn2ko467co0 --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-600000-m02"
	I0507 19:57:55.015332    5068 command_runner.go:130] > [preflight] Running pre-flight checks
	I0507 19:57:55.213916    5068 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0507 19:57:55.214602    5068 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0507 19:57:55.279502    5068 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0507 19:57:55.279502    5068 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0507 19:57:55.279502    5068 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0507 19:57:55.467946    5068 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0507 19:57:55.968722    5068 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 502.490866ms
	I0507 19:57:55.968899    5068 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0507 19:57:56.000400    5068 command_runner.go:130] > This node has joined the cluster:
	I0507 19:57:56.000468    5068 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0507 19:57:56.000468    5068 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0507 19:57:56.000468    5068 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0507 19:57:56.005791    5068 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0507 19:57:56.005791    5068 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token uswf23.agx8fhn2ko467co0 --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-600000-m02": (1.0526499s)
	I0507 19:57:56.005883    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0507 19:57:56.207296    5068 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0507 19:57:56.400922    5068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-600000-m02 minikube.k8s.io/updated_at=2024_05_07T19_57_56_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f minikube.k8s.io/name=multinode-600000 minikube.k8s.io/primary=false
	I0507 19:57:56.513181    5068 command_runner.go:130] > node/multinode-600000-m02 labeled
	I0507 19:57:56.513327    5068 start.go:318] duration metric: took 20.4536665s to joinCluster
	I0507 19:57:56.513594    5068 start.go:234] Will wait 6m0s for node &{Name:m02 IP:172.19.128.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0507 19:57:56.516902    5068 out.go:177] * Verifying Kubernetes components...
	I0507 19:57:56.514437    5068 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:57:56.528567    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:57:56.716237    5068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 19:57:56.742296    5068 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 19:57:56.743461    5068 kapi.go:59] client config for multinode-600000: &rest.Config{Host:"https://172.19.135.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAD
ata:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 19:57:56.744411    5068 node_ready.go:35] waiting up to 6m0s for node "multinode-600000-m02" to be "Ready" ...
	I0507 19:57:56.744578    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:57:56.744628    5068 round_trippers.go:469] Request Headers:
	I0507 19:57:56.744628    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:57:56.744674    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:57:56.749803    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:57:56.749803    5068 round_trippers.go:577] Response Headers:
	I0507 19:57:56.749803    5068 round_trippers.go:580]     Audit-Id: a11b36f0-d5d9-448e-bd52-f91e1c667b1f
	I0507 19:57:56.749803    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:57:56.749803    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:57:56.749803    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:57:56.749803    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:57:56.749803    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:57:56 GMT
	I0507 19:57:56.749803    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"ecb65c2c-9ac5-44bc-9509-f0c59100949c","resourceVersion":"2025","creationTimestamp":"2024-05-07T19:57:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_57_56_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:57:56Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3674 chars]
	I0507 19:57:57.251355    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:57:57.251355    5068 round_trippers.go:469] Request Headers:
	I0507 19:57:57.251733    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:57:57.251733    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:57:57.255008    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:57:57.255201    5068 round_trippers.go:577] Response Headers:
	I0507 19:57:57.255201    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:57:57.255201    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:57:57.255201    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:57:57.255201    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:57:57.255201    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:57:57 GMT
	I0507 19:57:57.255201    5068 round_trippers.go:580]     Audit-Id: 1c5d39f4-6bca-4849-b40a-41ae2e875be8
	I0507 19:57:57.255405    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"ecb65c2c-9ac5-44bc-9509-f0c59100949c","resourceVersion":"2025","creationTimestamp":"2024-05-07T19:57:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_57_56_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:57:56Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3674 chars]
	I0507 19:57:57.759424    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:57:57.759424    5068 round_trippers.go:469] Request Headers:
	I0507 19:57:57.759424    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:57:57.759424    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:57:57.763033    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:57:57.763078    5068 round_trippers.go:577] Response Headers:
	I0507 19:57:57.763138    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:57:57.763138    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:57:57.763138    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:57:58 GMT
	I0507 19:57:57.763138    5068 round_trippers.go:580]     Audit-Id: 0374eaec-7bf7-4303-9555-f9cfae80011a
	I0507 19:57:57.763174    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:57:57.763174    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:57:57.763458    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"ecb65c2c-9ac5-44bc-9509-f0c59100949c","resourceVersion":"2025","creationTimestamp":"2024-05-07T19:57:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_57_56_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:57:56Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3674 chars]
	I0507 19:57:58.251336    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:57:58.251336    5068 round_trippers.go:469] Request Headers:
	I0507 19:57:58.251336    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:57:58.251336    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:57:58.253902    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:57:58.253902    5068 round_trippers.go:577] Response Headers:
	I0507 19:57:58.253902    5068 round_trippers.go:580]     Audit-Id: 63ecbb10-c366-4f41-9542-9a1422e97db4
	I0507 19:57:58.253902    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:57:58.253902    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:57:58.253902    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:57:58.253902    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:57:58.253902    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:57:58 GMT
	I0507 19:57:58.254978    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"ecb65c2c-9ac5-44bc-9509-f0c59100949c","resourceVersion":"2025","creationTimestamp":"2024-05-07T19:57:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_57_56_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:57:56Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3674 chars]
	I0507 19:57:58.752938    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:57:58.753036    5068 round_trippers.go:469] Request Headers:
	I0507 19:57:58.753036    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:57:58.753036    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:57:58.757464    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:57:58.758383    5068 round_trippers.go:577] Response Headers:
	I0507 19:57:58.758383    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:57:58.758518    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:57:58.758518    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:57:58.758518    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:57:58.758518    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:57:58 GMT
	I0507 19:57:58.758518    5068 round_trippers.go:580]     Audit-Id: fdb176a8-7b5a-446c-9f62-73ce6cda4dc4
	I0507 19:57:58.758606    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"ecb65c2c-9ac5-44bc-9509-f0c59100949c","resourceVersion":"2025","creationTimestamp":"2024-05-07T19:57:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_57_56_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:57:56Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3674 chars]
	I0507 19:57:58.759450    5068 node_ready.go:53] node "multinode-600000-m02" has status "Ready":"False"
	I0507 19:57:59.253328    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:57:59.253571    5068 round_trippers.go:469] Request Headers:
	I0507 19:57:59.253571    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:57:59.253571    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:57:59.258146    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:57:59.258240    5068 round_trippers.go:577] Response Headers:
	I0507 19:57:59.258240    5068 round_trippers.go:580]     Audit-Id: d99d73b1-3207-49d0-9fd3-79e8d2a475d2
	I0507 19:57:59.258240    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:57:59.258240    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:57:59.258410    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:57:59.258410    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:57:59.258458    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:57:59 GMT
	I0507 19:57:59.258773    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"ecb65c2c-9ac5-44bc-9509-f0c59100949c","resourceVersion":"2025","creationTimestamp":"2024-05-07T19:57:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_57_56_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:57:56Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3674 chars]
	I0507 19:57:59.754500    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:57:59.754570    5068 round_trippers.go:469] Request Headers:
	I0507 19:57:59.754570    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:57:59.754570    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:57:59.760828    5068 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:57:59.760828    5068 round_trippers.go:577] Response Headers:
	I0507 19:57:59.760828    5068 round_trippers.go:580]     Audit-Id: 345bbdee-9ce6-404c-8e43-3e156d2ea9e3
	I0507 19:57:59.760828    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:57:59.760828    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:57:59.760828    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:57:59.760828    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:57:59.760828    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:00 GMT
	I0507 19:57:59.761533    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"ecb65c2c-9ac5-44bc-9509-f0c59100949c","resourceVersion":"2025","creationTimestamp":"2024-05-07T19:57:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_57_56_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:57:56Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3674 chars]
	I0507 19:58:00.254223    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:58:00.254223    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:00.254223    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:00.254223    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:00.258983    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:58:00.258983    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:00.258983    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:00.258983    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:00.258983    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:00 GMT
	I0507 19:58:00.258983    5068 round_trippers.go:580]     Audit-Id: 0651bf21-8ab9-4e3c-8f44-5c4bdcc847d7
	I0507 19:58:00.258983    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:00.258983    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:00.260087    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"ecb65c2c-9ac5-44bc-9509-f0c59100949c","resourceVersion":"2025","creationTimestamp":"2024-05-07T19:57:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_57_56_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:57:56Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3674 chars]
	I0507 19:58:00.754157    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:58:00.754157    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:00.754432    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:00.754432    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:00.757912    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:58:00.757912    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:00.757912    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:00 GMT
	I0507 19:58:00.757912    5068 round_trippers.go:580]     Audit-Id: cbbc3995-1535-46eb-b45a-78545603f88b
	I0507 19:58:00.757912    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:00.758016    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:00.758016    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:00.758016    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:00.758218    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"ecb65c2c-9ac5-44bc-9509-f0c59100949c","resourceVersion":"2025","creationTimestamp":"2024-05-07T19:57:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_57_56_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:57:56Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3674 chars]
	I0507 19:58:01.254716    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:58:01.254819    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:01.254819    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:01.254909    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:01.261253    5068 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:58:01.261778    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:01.261916    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:01 GMT
	I0507 19:58:01.261916    5068 round_trippers.go:580]     Audit-Id: 48dac9a6-c6dd-4e1e-be26-632f59c19523
	I0507 19:58:01.261916    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:01.261916    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:01.261916    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:01.261916    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:01.262209    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"ecb65c2c-9ac5-44bc-9509-f0c59100949c","resourceVersion":"2025","creationTimestamp":"2024-05-07T19:57:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_57_56_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:57:56Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3674 chars]
	I0507 19:58:01.263002    5068 node_ready.go:53] node "multinode-600000-m02" has status "Ready":"False"
	I0507 19:58:01.755940    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:58:01.755940    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:01.755940    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:01.755940    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:01.763082    5068 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:58:01.763082    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:01.763082    5068 round_trippers.go:580]     Audit-Id: 36be267d-9fdf-4821-a9d6-b02185495a9c
	I0507 19:58:01.763082    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:01.763082    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:01.763082    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:01.763082    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:01.763082    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:02 GMT
	I0507 19:58:01.763082    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"ecb65c2c-9ac5-44bc-9509-f0c59100949c","resourceVersion":"2025","creationTimestamp":"2024-05-07T19:57:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_57_56_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:57:56Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3674 chars]
	I0507 19:58:02.253062    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:58:02.253062    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:02.253062    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:02.253062    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:02.260055    5068 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:58:02.260055    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:02.260106    5068 round_trippers.go:580]     Audit-Id: 69c23213-80e7-4f0f-a565-e22fb82c67f8
	I0507 19:58:02.260106    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:02.260106    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:02.260106    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:02.260106    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:02.260106    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:02 GMT
	I0507 19:58:02.260106    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"ecb65c2c-9ac5-44bc-9509-f0c59100949c","resourceVersion":"2025","creationTimestamp":"2024-05-07T19:57:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_57_56_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:57:56Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3674 chars]
	I0507 19:58:02.753665    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:58:02.753665    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:02.753665    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:02.753665    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:02.757365    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:58:02.757579    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:02.757579    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:02.757579    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:02 GMT
	I0507 19:58:02.757579    5068 round_trippers.go:580]     Audit-Id: aa19fec2-efed-4300-b0ca-0d3201f94bd8
	I0507 19:58:02.757579    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:02.757579    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:02.757579    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:02.757579    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"ecb65c2c-9ac5-44bc-9509-f0c59100949c","resourceVersion":"2025","creationTimestamp":"2024-05-07T19:57:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_57_56_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:57:56Z","fieldsType":"FieldsV1","fieldsV1"
:{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3674 chars]
	I0507 19:58:03.253482    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:58:03.253546    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:03.253576    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:03.253576    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:03.261044    5068 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0507 19:58:03.261044    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:03.261044    5068 round_trippers.go:580]     Audit-Id: ee9b4d5c-2149-4e65-b28b-8a45d04c1045
	I0507 19:58:03.261044    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:03.261044    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:03.261044    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:03.261044    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:03.261044    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:03 GMT
	I0507 19:58:03.262025    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"ecb65c2c-9ac5-44bc-9509-f0c59100949c","resourceVersion":"2053","creationTimestamp":"2024-05-07T19:57:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_57_56_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:57:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3932 chars]
	I0507 19:58:03.262025    5068 node_ready.go:49] node "multinode-600000-m02" has status "Ready":"True"
	I0507 19:58:03.262025    5068 node_ready.go:38] duration metric: took 6.5171742s for node "multinode-600000-m02" to be "Ready" ...
	I0507 19:58:03.262025    5068 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 19:58:03.262025    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods
	I0507 19:58:03.262025    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:03.262025    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:03.262025    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:03.266821    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:58:03.266821    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:03.266821    5068 round_trippers.go:580]     Audit-Id: daee96b6-97ca-4633-ae82-dabd5c6c00e0
	I0507 19:58:03.266821    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:03.266821    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:03.266821    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:03.266821    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:03.266821    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:03 GMT
	I0507 19:58:03.268377    5068 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2055"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1873","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86092 chars]
	I0507 19:58:03.271686    5068 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace to be "Ready" ...
	I0507 19:58:03.271686    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 19:58:03.271686    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:03.271686    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:03.271686    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:03.274902    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:58:03.275199    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:03.275199    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:03.275199    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:03.275199    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:03 GMT
	I0507 19:58:03.275199    5068 round_trippers.go:580]     Audit-Id: fbed3d7a-dda2-46bd-b57b-42121e568778
	I0507 19:58:03.275199    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:03.275199    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:03.275458    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1873","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6788 chars]
	I0507 19:58:03.276650    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:58:03.276714    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:03.276714    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:03.276714    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:03.278566    5068 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0507 19:58:03.278566    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:03.278566    5068 round_trippers.go:580]     Audit-Id: 927c2fd9-4075-4d9a-b005-640b7a9bc002
	I0507 19:58:03.278566    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:03.278566    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:03.278566    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:03.278566    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:03.278566    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:03 GMT
	I0507 19:58:03.279232    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:58:03.279232    5068 pod_ready.go:92] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"True"
	I0507 19:58:03.279232    5068 pod_ready.go:81] duration metric: took 7.5459ms for pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace to be "Ready" ...
	I0507 19:58:03.279232    5068 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:58:03.280286    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-600000
	I0507 19:58:03.280286    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:03.280286    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:03.280286    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:03.285410    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 19:58:03.285410    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:03.285410    5068 round_trippers.go:580]     Audit-Id: 34003154-e03a-47fa-866e-015387c11270
	I0507 19:58:03.285410    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:03.285410    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:03.285410    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:03.285410    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:03.285410    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:03 GMT
	I0507 19:58:03.285911    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-600000","namespace":"kube-system","uid":"de6e93ee-7fd0-45cd-82eb-44edd4a2c2e3","resourceVersion":"1798","creationTimestamp":"2024-05-07T19:54:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.135.22:2379","kubernetes.io/config.hash":"1581bf6b00d338797c8fb8b10b74abde","kubernetes.io/config.mirror":"1581bf6b00d338797c8fb8b10b74abde","kubernetes.io/config.seen":"2024-05-07T19:54:28.831640546Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6160 chars]
	I0507 19:58:03.285934    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:58:03.285934    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:03.285934    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:03.285934    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:03.292228    5068 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0507 19:58:03.292228    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:03.292228    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:03.292228    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:03 GMT
	I0507 19:58:03.292228    5068 round_trippers.go:580]     Audit-Id: a3106eaf-5f91-4926-942d-3638ec9eea76
	I0507 19:58:03.292228    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:03.292228    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:03.292228    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:03.292228    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:58:03.292890    5068 pod_ready.go:92] pod "etcd-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 19:58:03.292890    5068 pod_ready.go:81] duration metric: took 13.6569ms for pod "etcd-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:58:03.292890    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:58:03.292890    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600000
	I0507 19:58:03.292890    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:03.292890    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:03.292890    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:03.296155    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:58:03.296155    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:03.296155    5068 round_trippers.go:580]     Audit-Id: 8d9a4611-bbac-4f12-b2cf-99a76178afae
	I0507 19:58:03.296155    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:03.296155    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:03.296155    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:03.296155    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:03.296155    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:03 GMT
	I0507 19:58:03.296629    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600000","namespace":"kube-system","uid":"4d9ace3f-e061-42ab-bb1d-3dac545f96a9","resourceVersion":"1795","creationTimestamp":"2024-05-07T19:54:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.135.22:8443","kubernetes.io/config.hash":"cd9cba8f94818776ec6d8836322192b3","kubernetes.io/config.mirror":"cd9cba8f94818776ec6d8836322192b3","kubernetes.io/config.seen":"2024-05-07T19:54:28.735132188Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:54:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7695 chars]
	I0507 19:58:03.296782    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:58:03.296782    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:03.296782    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:03.296782    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:03.299348    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:58:03.299764    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:03.299764    5068 round_trippers.go:580]     Audit-Id: 58b6ad50-686d-4d78-9be0-44c69c84b3a1
	I0507 19:58:03.299764    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:03.299764    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:03.299764    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:03.299764    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:03.299829    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:03 GMT
	I0507 19:58:03.299955    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:58:03.300477    5068 pod_ready.go:92] pod "kube-apiserver-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 19:58:03.300477    5068 pod_ready.go:81] duration metric: took 7.586ms for pod "kube-apiserver-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:58:03.300477    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:58:03.300477    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-600000
	I0507 19:58:03.300477    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:03.300477    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:03.300477    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:03.302328    5068 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0507 19:58:03.302328    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:03.302328    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:03.302328    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:03.302328    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:03.303376    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:03.303376    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:03 GMT
	I0507 19:58:03.303376    5068 round_trippers.go:580]     Audit-Id: b266f57f-95e2-4a24-9e46-a2970cfec430
	I0507 19:58:03.303592    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-600000","namespace":"kube-system","uid":"b960b526-da40-480d-9a72-9ab8c7f2989a","resourceVersion":"1797","creationTimestamp":"2024-05-07T19:33:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f5d6aa60dc93b5e562f37ed2236c3022","kubernetes.io/config.mirror":"f5d6aa60dc93b5e562f37ed2236c3022","kubernetes.io/config.seen":"2024-05-07T19:33:37.010155750Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0507 19:58:03.303592    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:58:03.304129    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:03.304129    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:03.304129    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:03.306824    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:58:03.306925    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:03.306964    5068 round_trippers.go:580]     Audit-Id: f8845898-6986-4780-96ef-8220f812ef15
	I0507 19:58:03.306964    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:03.306964    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:03.306988    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:03.306988    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:03.306988    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:03 GMT
	I0507 19:58:03.306988    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:58:03.306988    5068 pod_ready.go:92] pod "kube-controller-manager-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 19:58:03.306988    5068 pod_ready.go:81] duration metric: took 6.5111ms for pod "kube-controller-manager-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:58:03.306988    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9fb6t" in "kube-system" namespace to be "Ready" ...
	I0507 19:58:03.454871    5068 request.go:629] Waited for 147.5758ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9fb6t
	I0507 19:58:03.455189    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9fb6t
	I0507 19:58:03.455432    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:03.455432    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:03.455432    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:03.458709    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:58:03.458709    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:03.458709    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:03.458709    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:03 GMT
	I0507 19:58:03.458709    5068 round_trippers.go:580]     Audit-Id: f3c142c8-7d9e-4b84-ba9e-1ab70da6d547
	I0507 19:58:03.458709    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:03.458709    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:03.458709    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:03.459014    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9fb6t","generateName":"kube-proxy-","namespace":"kube-system","uid":"f91cc93c-cb87-4494-9e11-b3bf74b9311d","resourceVersion":"2040","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"952e0024-0710-460c-920c-3959ceadbd10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"952e0024-0710-460c-920c-3959ceadbd10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5837 chars]
	I0507 19:58:03.658249    5068 request.go:629] Waited for 198.6941ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:58:03.658510    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 19:58:03.658821    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:03.658821    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:03.658821    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:03.662209    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:58:03.662209    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:03.662209    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:03.662209    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:03.662209    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:03 GMT
	I0507 19:58:03.662209    5068 round_trippers.go:580]     Audit-Id: 93f4910e-230b-4769-9d89-5edcc87a318c
	I0507 19:58:03.662379    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:03.662379    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:03.662694    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"ecb65c2c-9ac5-44bc-9509-f0c59100949c","resourceVersion":"2053","creationTimestamp":"2024-05-07T19:57:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_57_56_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:57:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3932 chars]
	I0507 19:58:03.663341    5068 pod_ready.go:92] pod "kube-proxy-9fb6t" in "kube-system" namespace has status "Ready":"True"
	I0507 19:58:03.663426    5068 pod_ready.go:81] duration metric: took 356.3284ms for pod "kube-proxy-9fb6t" in "kube-system" namespace to be "Ready" ...
	I0507 19:58:03.663426    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c9gw5" in "kube-system" namespace to be "Ready" ...
	I0507 19:58:03.860708    5068 request.go:629] Waited for 197.1796ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9gw5
	I0507 19:58:03.860943    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9gw5
	I0507 19:58:03.860943    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:03.860943    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:03.860943    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:03.864877    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:58:03.864877    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:03.864877    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:03.864877    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:03.864877    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:03.864877    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:03.864877    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:04 GMT
	I0507 19:58:03.864877    5068 round_trippers.go:580]     Audit-Id: 5a20b0fd-8d9c-4412-bf2e-9925bf91876c
	I0507 19:58:03.865415    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c9gw5","generateName":"kube-proxy-","namespace":"kube-system","uid":"9a39807c-6243-4aa2-86f4-8626031c80a6","resourceVersion":"1759","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"952e0024-0710-460c-920c-3959ceadbd10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"952e0024-0710-460c-920c-3959ceadbd10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0507 19:58:04.063510    5068 request.go:629] Waited for 197.1054ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:58:04.063781    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:58:04.063885    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:04.063885    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:04.063885    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:04.067012    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:58:04.067073    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:04.067130    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:04.067130    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:04 GMT
	I0507 19:58:04.067130    5068 round_trippers.go:580]     Audit-Id: 96f3252a-54ac-4769-a00e-cf7e31520d37
	I0507 19:58:04.067130    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:04.067130    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:04.067130    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:04.067983    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:58:04.068719    5068 pod_ready.go:92] pod "kube-proxy-c9gw5" in "kube-system" namespace has status "Ready":"True"
	I0507 19:58:04.068719    5068 pod_ready.go:81] duration metric: took 405.266ms for pod "kube-proxy-c9gw5" in "kube-system" namespace to be "Ready" ...
	I0507 19:58:04.068797    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pzn8q" in "kube-system" namespace to be "Ready" ...
	I0507 19:58:04.267429    5068 request.go:629] Waited for 198.6185ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pzn8q
	I0507 19:58:04.267429    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pzn8q
	I0507 19:58:04.267429    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:04.267429    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:04.267429    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:04.271454    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:58:04.271454    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:04.271454    5068 round_trippers.go:580]     Audit-Id: 7faaf4fa-2d4e-4ecc-9c5c-63c8847bf6dc
	I0507 19:58:04.271808    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:04.271808    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:04.271808    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:04.271808    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:04.271808    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:04 GMT
	I0507 19:58:04.272044    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pzn8q","generateName":"kube-proxy-","namespace":"kube-system","uid":"f2506861-1f09-4193-b751-22a685a0b71b","resourceVersion":"1643","creationTimestamp":"2024-05-07T19:40:53Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"952e0024-0710-460c-920c-3959ceadbd10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:40:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"952e0024-0710-460c-920c-3959ceadbd10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6057 chars]
	I0507 19:58:04.468609    5068 request.go:629] Waited for 195.4608ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 19:58:04.468790    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 19:58:04.468790    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:04.468790    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:04.469116    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:04.473924    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 19:58:04.473924    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:04.473924    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:04.473924    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:04.473924    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:04.473924    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:04 GMT
	I0507 19:58:04.473924    5068 round_trippers.go:580]     Audit-Id: df7c9a61-4e4d-4c7d-8a03-4157b2e4632d
	I0507 19:58:04.473924    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:04.473924    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m03","uid":"ec7533ad-814b-49fe-bc8d-a070f7fb171f","resourceVersion":"1814","creationTimestamp":"2024-05-07T19:50:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_50_26_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:50:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 4398 chars]
	I0507 19:58:04.475036    5068 pod_ready.go:97] node "multinode-600000-m03" hosting pod "kube-proxy-pzn8q" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000-m03" has status "Ready":"Unknown"
	I0507 19:58:04.475121    5068 pod_ready.go:81] duration metric: took 406.2963ms for pod "kube-proxy-pzn8q" in "kube-system" namespace to be "Ready" ...
	E0507 19:58:04.475121    5068 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-600000-m03" hosting pod "kube-proxy-pzn8q" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-600000-m03" has status "Ready":"Unknown"
	I0507 19:58:04.475121    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:58:04.655780    5068 request.go:629] Waited for 180.5036ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600000
	I0507 19:58:04.656446    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600000
	I0507 19:58:04.656446    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:04.656646    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:04.656646    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:04.659985    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:58:04.660291    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:04.660291    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:04 GMT
	I0507 19:58:04.660377    5068 round_trippers.go:580]     Audit-Id: e204ea22-0027-4161-9883-89beddb762b5
	I0507 19:58:04.660377    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:04.660377    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:04.660377    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:04.660377    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:04.660616    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-600000","namespace":"kube-system","uid":"ec3ac949-cb83-49be-a908-c93e23135ae8","resourceVersion":"1777","creationTimestamp":"2024-05-07T19:33:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c4ee79f6d4f6adb00b636f817445fef","kubernetes.io/config.mirror":"7c4ee79f6d4f6adb00b636f817445fef","kubernetes.io/config.seen":"2024-05-07T19:33:44.165677427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0507 19:58:04.859057    5068 request.go:629] Waited for 197.2366ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:58:04.859389    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 19:58:04.859389    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:04.859389    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:04.859389    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:04.862493    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 19:58:04.862493    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:04.862493    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:04.862493    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:05 GMT
	I0507 19:58:04.862493    5068 round_trippers.go:580]     Audit-Id: 6fed3ace-5d30-4f77-af54-73dc42f70992
	I0507 19:58:04.862493    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:04.862493    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:04.862493    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:04.862493    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 19:58:04.862493    5068 pod_ready.go:92] pod "kube-scheduler-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 19:58:04.862493    5068 pod_ready.go:81] duration metric: took 387.2535ms for pod "kube-scheduler-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 19:58:04.862493    5068 pod_ready.go:38] duration metric: took 1.6003592s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 19:58:04.862493    5068 system_svc.go:44] waiting for kubelet service to be running ....
	I0507 19:58:04.876200    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 19:58:04.903152    5068 system_svc.go:56] duration metric: took 40.6569ms WaitForService to wait for kubelet
	I0507 19:58:04.903152    5068 kubeadm.go:576] duration metric: took 8.3889257s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 19:58:04.903152    5068 node_conditions.go:102] verifying NodePressure condition ...
	I0507 19:58:05.061650    5068 request.go:629] Waited for 158.1675ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes
	I0507 19:58:05.061772    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes
	I0507 19:58:05.061772    5068 round_trippers.go:469] Request Headers:
	I0507 19:58:05.061844    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 19:58:05.061844    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 19:58:05.064588    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 19:58:05.064588    5068 round_trippers.go:577] Response Headers:
	I0507 19:58:05.064588    5068 round_trippers.go:580]     Audit-Id: 98ee5c4e-00a9-416b-81d7-7bff106e067a
	I0507 19:58:05.064588    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 19:58:05.064588    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 19:58:05.065040    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 19:58:05.065040    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 19:58:05.065040    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 19:58:05 GMT
	I0507 19:58:05.065804    5068 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2057"},"items":[{"metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"1836","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 15606 chars]
	I0507 19:58:05.067315    5068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 19:58:05.067394    5068 node_conditions.go:123] node cpu capacity is 2
	I0507 19:58:05.067394    5068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 19:58:05.067394    5068 node_conditions.go:123] node cpu capacity is 2
	I0507 19:58:05.067394    5068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 19:58:05.067394    5068 node_conditions.go:123] node cpu capacity is 2
	I0507 19:58:05.067394    5068 node_conditions.go:105] duration metric: took 164.2308ms to run NodePressure ...
	I0507 19:58:05.067394    5068 start.go:240] waiting for startup goroutines ...
	I0507 19:58:05.067500    5068 start.go:254] writing updated cluster config ...
	I0507 19:58:05.071915    5068 out.go:177] 
	I0507 19:58:05.074963    5068 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:58:05.081325    5068 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:58:05.081862    5068 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\config.json ...
	I0507 19:58:05.086886    5068 out.go:177] * Starting "multinode-600000-m03" worker node in "multinode-600000" cluster
	I0507 19:58:05.088265    5068 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0507 19:58:05.088265    5068 cache.go:56] Caching tarball of preloaded images
	I0507 19:58:05.088998    5068 preload.go:173] Found C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0507 19:58:05.088998    5068 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0507 19:58:05.089557    5068 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\config.json ...
	I0507 19:58:05.093213    5068 start.go:360] acquireMachinesLock for multinode-600000-m03: {Name:mkcc4972200741852cdd82af2325146d8aedcde8 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0507 19:58:05.093826    5068 start.go:364] duration metric: took 70.6µs to acquireMachinesLock for "multinode-600000-m03"
	I0507 19:58:05.093826    5068 start.go:96] Skipping create...Using existing machine configuration
	I0507 19:58:05.093826    5068 fix.go:54] fixHost starting: m03
	I0507 19:58:05.094471    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:58:06.937999    5068 main.go:141] libmachine: [stdout =====>] : Off
	
	I0507 19:58:06.937999    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:06.937999    5068 fix.go:112] recreateIfNeeded on multinode-600000-m03: state=Stopped err=<nil>
	W0507 19:58:06.937999    5068 fix.go:138] unexpected machine state, will restart: <nil>
	I0507 19:58:06.941811    5068 out.go:177] * Restarting existing hyperv VM for "multinode-600000-m03" ...
	I0507 19:58:06.943753    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-600000-m03
	I0507 19:58:09.715478    5068 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:58:09.715527    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:09.715527    5068 main.go:141] libmachine: Waiting for host to start...
	I0507 19:58:09.715527    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:58:11.733107    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:58:11.733107    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:11.733107    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:58:13.964278    5068 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:58:13.964278    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:14.972393    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:58:16.926050    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:58:16.926131    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:16.926131    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:58:19.176238    5068 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:58:19.176238    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:20.183029    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:58:22.128238    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:58:22.128238    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:22.128238    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:58:24.359399    5068 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:58:24.359399    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:25.362129    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:58:27.347713    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:58:27.348172    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:27.348172    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:58:29.597995    5068 main.go:141] libmachine: [stdout =====>] : 
	I0507 19:58:29.598989    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:30.603170    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:58:32.571931    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:58:32.571931    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:32.571931    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:58:34.887698    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:58:34.887698    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:34.889417    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:58:36.813331    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:58:36.813380    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:36.813380    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:58:39.086985    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:58:39.087518    5068 main.go:141] libmachine: [stderr =====>] : 
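The alternating `Get-VM ... .state` / `ipaddresses[0]` calls above (with empty stdout at 19:58:24 and 19:58:29, then `172.19.142.217` at 19:58:34) are the Hyper-V driver polling until the guest reports an address. A minimal Go sketch of that retry loop, where `waitForIP` and the injected `query` are illustrative stand-ins for the PowerShell invocation, not minikube's actual API:

```go
package main

import (
	"fmt"
	"strings"
)

// waitForIP retries an IP query until non-empty stdout comes back, as the
// log shows libmachine doing with
//   (( Hyper-V\Get-VM <name> ).networkadapters[0]).ipaddresses[0]
// query stands in for the PowerShell call.
func waitForIP(query func() (string, error), attempts int) (string, error) {
	for i := 0; i < attempts; i++ {
		out, err := query()
		if err != nil {
			return "", err
		}
		if ip := strings.TrimSpace(out); ip != "" {
			return ip, nil
		}
		// the log shows roughly a one-second pause between attempts,
		// omitted here to keep the sketch testable
	}
	return "", fmt.Errorf("machine did not report an IP after %d attempts", attempts)
}

func main() {
	replies := []string{"", "", "172.19.142.217"}
	n := 0
	ip, err := waitForIP(func() (string, error) { out := replies[n]; n++; return out, nil }, 5)
	fmt.Println(ip, err)
}
```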
	I0507 19:58:39.087730    5068 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000\config.json ...
	I0507 19:58:39.090466    5068 machine.go:94] provisionDockerMachine start ...
	I0507 19:58:39.090616    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:58:41.031163    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:58:41.031255    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:41.031255    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:58:43.318931    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:58:43.319542    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:43.324919    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:58:43.325412    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.142.217 22 <nil> <nil>}
	I0507 19:58:43.325508    5068 main.go:141] libmachine: About to run SSH command:
	hostname
	I0507 19:58:43.466030    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0507 19:58:43.466196    5068 buildroot.go:166] provisioning hostname "multinode-600000-m03"
	I0507 19:58:43.466196    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:58:45.369464    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:58:45.369534    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:45.369601    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:58:47.642643    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:58:47.642834    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:47.648635    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:58:47.649243    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.142.217 22 <nil> <nil>}
	I0507 19:58:47.649243    5068 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-600000-m03 && echo "multinode-600000-m03" | sudo tee /etc/hostname
	I0507 19:58:47.820550    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-600000-m03
	
	I0507 19:58:47.820774    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:58:49.714539    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:58:49.714539    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:49.714634    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:58:51.963416    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:58:51.963416    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:51.965294    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:58:51.965294    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.142.217 22 <nil> <nil>}
	I0507 19:58:51.965294    5068 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-600000-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-600000-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-600000-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 19:58:52.103964    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: 
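The grep/sed guard just executed is idempotent: skip if the hostname is already mapped, rewrite an existing `127.0.1.1` line if present, otherwise append. The same logic in pure Go (the function name is illustrative; minikube runs the shell version over SSH):

```go
package main

import (
	"fmt"
	"regexp"
)

// ensureHostname reproduces the /etc/hosts guard from the log: leave the
// file alone if the hostname already appears, rewrite an existing
// 127.0.1.1 line if there is one, otherwise append a new mapping.
func ensureHostname(hosts, name string) string {
	if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(name) + `$`).MatchString(hosts) {
		return hosts // already mapped, nothing to do
	}
	loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
	if loopback.MatchString(hosts) {
		return loopback.ReplaceAllString(hosts, "127.0.1.1 "+name)
	}
	return hosts + "127.0.1.1 " + name + "\n"
}

func main() {
	hosts := "127.0.0.1 localhost\n127.0.1.1 old-name\n"
	fmt.Print(ensureHostname(hosts, "multinode-600000-m03"))
}
```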
	I0507 19:58:52.103964    5068 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube5\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube5\minikube-integration\.minikube}
	I0507 19:58:52.103964    5068 buildroot.go:174] setting up certificates
	I0507 19:58:52.103964    5068 provision.go:84] configureAuth start
	I0507 19:58:52.104492    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:58:53.992083    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:58:53.992083    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:53.992780    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:58:56.250739    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:58:56.250739    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:56.251666    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:58:58.130468    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:58:58.130468    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:58:58.131246    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:59:00.402460    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:59:00.403115    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:00.403115    5068 provision.go:143] copyHostCerts
	I0507 19:59:00.403199    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem
	I0507 19:59:00.403379    5068 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem, removing ...
	I0507 19:59:00.403379    5068 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.pem
	I0507 19:59:00.403781    5068 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/ca.pem (1082 bytes)
	I0507 19:59:00.404660    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem
	I0507 19:59:00.404892    5068 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem, removing ...
	I0507 19:59:00.404950    5068 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\cert.pem
	I0507 19:59:00.405161    5068 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0507 19:59:00.406173    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem
	I0507 19:59:00.406340    5068 exec_runner.go:144] found C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem, removing ...
	I0507 19:59:00.406340    5068 exec_runner.go:203] rm: C:\Users\jenkins.minikube5\minikube-integration\.minikube\key.pem
	I0507 19:59:00.406697    5068 exec_runner.go:151] cp: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube5\minikube-integration\.minikube/key.pem (1679 bytes)
	I0507 19:59:00.406990    5068 provision.go:117] generating server cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-600000-m03 san=[127.0.0.1 172.19.142.217 localhost minikube multinode-600000-m03]
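The `generating server cert` step above produces a certificate whose SANs cover `127.0.0.1 172.19.142.217 localhost minikube multinode-600000-m03`. A sketch of building such a certificate with Go's standard `crypto/x509` — note minikube signs it with its CA key (`ca-key.pem`), whereas this sketch self-signs to stay self-contained:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// makeServerCert builds a server certificate carrying the DNS and IP SANs
// listed in the log. Self-signed here for brevity; the real provisioner
// signs with the minikube CA. The helper name is illustrative.
func makeServerCert() (*x509.Certificate, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-600000-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "multinode-600000-m03"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.19.142.217")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, err
	}
	return x509.ParseCertificate(der)
}

func main() {
	cert, err := makeServerCert()
	if err != nil {
		panic(err)
	}
	fmt.Println(cert.DNSNames, cert.IPAddresses)
}
```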
	I0507 19:59:00.568831    5068 provision.go:177] copyRemoteCerts
	I0507 19:59:00.577056    5068 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 19:59:00.577056    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:59:02.512039    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:59:02.512039    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:02.512872    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:59:04.749458    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:59:04.749458    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:04.750662    5068 sshutil.go:53] new ssh client: &{IP:172.19.142.217 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m03\id_rsa Username:docker}
	I0507 19:59:04.861332    5068 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.2839833s)
	I0507 19:59:04.861468    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0507 19:59:04.861755    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0507 19:59:04.905628    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0507 19:59:04.905628    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0507 19:59:04.951282    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0507 19:59:04.951515    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0507 19:59:04.995151    5068 provision.go:87] duration metric: took 12.8903068s to configureAuth
	I0507 19:59:04.995151    5068 buildroot.go:189] setting minikube options for container-runtime
	I0507 19:59:04.995842    5068 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:59:04.995914    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:59:06.901155    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:59:06.901600    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:06.901600    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:59:09.173857    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:59:09.173857    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:09.178599    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:59:09.178754    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.142.217 22 <nil> <nil>}
	I0507 19:59:09.178754    5068 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0507 19:59:09.312251    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0507 19:59:09.312251    5068 buildroot.go:70] root file system type: tmpfs
	I0507 19:59:09.312559    5068 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0507 19:59:09.312559    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:59:11.210272    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:59:11.210272    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:11.210781    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:59:13.517409    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:59:13.517802    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:13.521288    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:59:13.521475    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.142.217 22 <nil> <nil>}
	I0507 19:59:13.521475    5068 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.19.135.22"
	Environment="NO_PROXY=172.19.135.22,172.19.128.95"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0507 19:59:13.679517    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.19.135.22
	Environment=NO_PROXY=172.19.135.22,172.19.128.95
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0507 19:59:13.679616    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:59:15.569849    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:59:15.569849    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:15.570950    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:59:17.832608    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:59:17.832688    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:17.839275    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:59:17.839275    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.142.217 22 <nil> <nil>}
	I0507 19:59:17.839275    5068 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0507 19:59:20.024493    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0507 19:59:20.024559    5068 machine.go:97] duration metric: took 40.9312217s to provisionDockerMachine
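The unit-file update above is a compare-and-swap: the new unit is written to `docker.service.new`, and only when `diff` reports a difference (here it fails outright with `can't stat`, since the unit does not exist yet on this node) does the `||` branch move it into place and reload/enable/restart docker — hence the `Created symlink` output. Rebuilding that shell command in Go (the helper name is illustrative):

```go
package main

import "fmt"

// installUnitCmd reproduces the compare-and-swap command the provisioner
// runs after writing docker.service.new: only when the new unit differs
// from the installed one is it moved into place and the daemon reloaded.
func installUnitCmd(path string) string {
	return fmt.Sprintf("sudo diff -u %[1]s %[1]s.new || "+
		"{ sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && "+
		"sudo systemctl -f enable docker && sudo systemctl -f restart docker; }", path)
}

func main() {
	fmt.Println(installUnitCmd("/lib/systemd/system/docker.service"))
}
```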
	I0507 19:59:20.024559    5068 start.go:293] postStartSetup for "multinode-600000-m03" (driver="hyperv")
	I0507 19:59:20.024559    5068 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 19:59:20.032493    5068 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 19:59:20.032493    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:59:21.909503    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:59:21.909503    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:21.909503    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:59:24.179401    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:59:24.179401    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:24.179788    5068 sshutil.go:53] new ssh client: &{IP:172.19.142.217 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m03\id_rsa Username:docker}
	I0507 19:59:24.297227    5068 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.2643644s)
	I0507 19:59:24.305817    5068 ssh_runner.go:195] Run: cat /etc/os-release
	I0507 19:59:24.311829    5068 command_runner.go:130] > NAME=Buildroot
	I0507 19:59:24.312122    5068 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0507 19:59:24.312122    5068 command_runner.go:130] > ID=buildroot
	I0507 19:59:24.312122    5068 command_runner.go:130] > VERSION_ID=2023.02.9
	I0507 19:59:24.312122    5068 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0507 19:59:24.312732    5068 info.go:137] Remote host: Buildroot 2023.02.9
	I0507 19:59:24.312849    5068 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\addons for local assets ...
	I0507 19:59:24.313070    5068 filesync.go:126] Scanning C:\Users\jenkins.minikube5\minikube-integration\.minikube\files for local assets ...
	I0507 19:59:24.313654    5068 filesync.go:149] local asset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> 99922.pem in /etc/ssl/certs
	I0507 19:59:24.313750    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /etc/ssl/certs/99922.pem
	I0507 19:59:24.322669    5068 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0507 19:59:24.338689    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /etc/ssl/certs/99922.pem (1708 bytes)
	I0507 19:59:24.379156    5068 start.go:296] duration metric: took 4.3542994s for postStartSetup
	I0507 19:59:24.380138    5068 fix.go:56] duration metric: took 1m19.2809126s for fixHost
	I0507 19:59:24.380138    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:59:26.303134    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:59:26.303134    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:26.303134    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:59:28.630084    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:59:28.631193    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:28.634612    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:59:28.634612    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.142.217 22 <nil> <nil>}
	I0507 19:59:28.634612    5068 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0507 19:59:28.764208    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: 1715111969.002479747
	
	I0507 19:59:28.764243    5068 fix.go:216] guest clock: 1715111969.002479747
	I0507 19:59:28.764243    5068 fix.go:229] Guest: 2024-05-07 19:59:29.002479747 +0000 UTC Remote: 2024-05-07 19:59:24.3801389 +0000 UTC m=+407.213984301 (delta=4.622340847s)
	I0507 19:59:28.764340    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:59:30.675240    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:59:30.675240    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:30.675316    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:59:32.922506    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:59:32.922559    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:32.926064    5068 main.go:141] libmachine: Using SSH client type: native
	I0507 19:59:32.926064    5068 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.142.217 22 <nil> <nil>}
	I0507 19:59:32.926583    5068 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1715111968
	I0507 19:59:33.067417    5068 main.go:141] libmachine: SSH cmd err, output: <nil>: Tue May  7 19:59:28 UTC 2024
	
	I0507 19:59:33.067417    5068 fix.go:236] clock set: Tue May  7 19:59:28 UTC 2024
	 (err=<nil>)
	I0507 19:59:33.067417    5068 start.go:83] releasing machines lock for "multinode-600000-m03", held for 1m27.9675956s
	I0507 19:59:33.067417    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:59:34.944681    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:59:34.944681    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:34.944681    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:59:37.182035    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:59:37.182341    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:37.185554    5068 out.go:177] * Found network options:
	I0507 19:59:37.189868    5068 out.go:177]   - NO_PROXY=172.19.135.22,172.19.128.95
	W0507 19:59:37.192007    5068 proxy.go:119] fail to check proxy env: Error ip not in block
	W0507 19:59:37.192007    5068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0507 19:59:37.193875    5068 out.go:177]   - NO_PROXY=172.19.135.22,172.19.128.95
	W0507 19:59:37.195902    5068 proxy.go:119] fail to check proxy env: Error ip not in block
	W0507 19:59:37.195902    5068 proxy.go:119] fail to check proxy env: Error ip not in block
	W0507 19:59:37.197034    5068 proxy.go:119] fail to check proxy env: Error ip not in block
	W0507 19:59:37.197034    5068 proxy.go:119] fail to check proxy env: Error ip not in block
	I0507 19:59:37.199640    5068 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0507 19:59:37.199640    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:59:37.206146    5068 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0507 19:59:37.206146    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:59:39.161984    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:59:39.162570    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:39.162620    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:59:39.175771    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:59:39.175771    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:39.175771    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 19:59:41.552752    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:59:41.552752    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:41.553298    5068 sshutil.go:53] new ssh client: &{IP:172.19.142.217 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m03\id_rsa Username:docker}
	I0507 19:59:41.585242    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 19:59:41.585998    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:41.586381    5068 sshutil.go:53] new ssh client: &{IP:172.19.142.217 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m03\id_rsa Username:docker}
	I0507 19:59:41.646519    5068 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0507 19:59:41.647655    5068 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (4.4412045s)
	W0507 19:59:41.647694    5068 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0507 19:59:41.656346    5068 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0507 19:59:41.724306    5068 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0507 19:59:41.724422    5068 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.5244706s)
	I0507 19:59:41.724573    5068 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0507 19:59:41.724573    5068 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0507 19:59:41.724573    5068 start.go:494] detecting cgroup driver to use...
	I0507 19:59:41.725117    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 19:59:41.755681    5068 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0507 19:59:41.764564    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0507 19:59:41.790521    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0507 19:59:41.807658    5068 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0507 19:59:41.817092    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0507 19:59:41.843036    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 19:59:41.870997    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0507 19:59:41.897530    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0507 19:59:41.924562    5068 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0507 19:59:41.950670    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0507 19:59:41.976307    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0507 19:59:42.003573    5068 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
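(Annotation, not part of the captured log.) The run of `sed` commands above rewrites containerd's `config.toml` in place: pin the pause image, force `SystemdCgroup = false` (the "cgroupfs" driver the log mentions), and re-enable unprivileged ports. A minimal local reconstruction, applied to a made-up sample file rather than the VM's real config, shows the effect of three of those substitutions:

```shell
# Hypothetical sample mimicking the config.toml fields the log edits;
# the contents are illustrative, not copied from the VM.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.8"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
    SystemdCgroup = true
EOF
# Same substitutions the ssh_runner issues above, run locally (GNU sed):
sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' "$cfg"
sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' "$cfg"
sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' "$cfg"
cat "$cfg"
```

The `-r` flag and the captured indentation group (`\1`) let each rule rewrite the value while preserving the key's original nesting depth.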
	I0507 19:59:42.029495    5068 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 19:59:42.046646    5068 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0507 19:59:42.057500    5068 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 19:59:42.081199    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:59:42.270380    5068 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0507 19:59:42.301648    5068 start.go:494] detecting cgroup driver to use...
	I0507 19:59:42.309724    5068 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0507 19:59:42.328594    5068 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0507 19:59:42.328966    5068 command_runner.go:130] > [Unit]
	I0507 19:59:42.329063    5068 command_runner.go:130] > Description=Docker Application Container Engine
	I0507 19:59:42.329063    5068 command_runner.go:130] > Documentation=https://docs.docker.com
	I0507 19:59:42.329063    5068 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0507 19:59:42.329120    5068 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0507 19:59:42.329120    5068 command_runner.go:130] > StartLimitBurst=3
	I0507 19:59:42.329120    5068 command_runner.go:130] > StartLimitIntervalSec=60
	I0507 19:59:42.329120    5068 command_runner.go:130] > [Service]
	I0507 19:59:42.329188    5068 command_runner.go:130] > Type=notify
	I0507 19:59:42.329395    5068 command_runner.go:130] > Restart=on-failure
	I0507 19:59:42.329472    5068 command_runner.go:130] > Environment=NO_PROXY=172.19.135.22
	I0507 19:59:42.329472    5068 command_runner.go:130] > Environment=NO_PROXY=172.19.135.22,172.19.128.95
	I0507 19:59:42.329536    5068 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0507 19:59:42.329536    5068 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0507 19:59:42.329610    5068 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0507 19:59:42.329610    5068 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0507 19:59:42.329674    5068 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0507 19:59:42.329674    5068 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0507 19:59:42.329739    5068 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0507 19:59:42.329739    5068 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0507 19:59:42.329796    5068 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0507 19:59:42.329796    5068 command_runner.go:130] > ExecStart=
	I0507 19:59:42.329923    5068 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0507 19:59:42.329923    5068 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0507 19:59:42.329964    5068 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0507 19:59:42.329964    5068 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0507 19:59:42.329964    5068 command_runner.go:130] > LimitNOFILE=infinity
	I0507 19:59:42.330052    5068 command_runner.go:130] > LimitNPROC=infinity
	I0507 19:59:42.330052    5068 command_runner.go:130] > LimitCORE=infinity
	I0507 19:59:42.330052    5068 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0507 19:59:42.330052    5068 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0507 19:59:42.330052    5068 command_runner.go:130] > TasksMax=infinity
	I0507 19:59:42.330146    5068 command_runner.go:130] > TimeoutStartSec=0
	I0507 19:59:42.330146    5068 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0507 19:59:42.330146    5068 command_runner.go:130] > Delegate=yes
	I0507 19:59:42.330215    5068 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0507 19:59:42.330215    5068 command_runner.go:130] > KillMode=process
	I0507 19:59:42.330215    5068 command_runner.go:130] > [Install]
	I0507 19:59:42.330215    5068 command_runner.go:130] > WantedBy=multi-user.target
	I0507 19:59:42.340327    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 19:59:42.367627    5068 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0507 19:59:42.396283    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0507 19:59:42.426662    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 19:59:42.458633    5068 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0507 19:59:42.512741    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0507 19:59:42.534718    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 19:59:42.563618    5068 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
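(Annotation, not part of the captured log.) The `%!s(MISSING)` in the logged command is a Go `fmt` artifact; the command actually executed is `printf %s "…" | sudo tee /etc/crictl.yaml`, pointing crictl at the cri-dockerd socket. A local sketch of the same write, targeting a temp file instead of `/etc/crictl.yaml`:

```shell
# Reconstruct the crictl.yaml write against a scratch file (path is
# illustrative; the real target is /etc/crictl.yaml on the VM).
crictl_yaml=$(mktemp)
printf '%s\n' 'runtime-endpoint: unix:///var/run/cri-dockerd.sock' | tee "$crictl_yaml"
```

The `tee` on the right of the pipe is what lets the real command write a root-owned file under `sudo` while the shell redirection itself runs unprivileged.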
	I0507 19:59:42.572432    5068 ssh_runner.go:195] Run: which cri-dockerd
	I0507 19:59:42.578114    5068 command_runner.go:130] > /usr/bin/cri-dockerd
	I0507 19:59:42.586997    5068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0507 19:59:42.603618    5068 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0507 19:59:42.642027    5068 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0507 19:59:42.819765    5068 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0507 19:59:43.003710    5068 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0507 19:59:43.003761    5068 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0507 19:59:43.041946    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:59:43.227280    5068 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0507 19:59:45.796755    5068 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5692978s)
	I0507 19:59:45.805168    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0507 19:59:45.834848    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 19:59:45.868768    5068 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0507 19:59:46.058535    5068 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0507 19:59:46.235837    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:59:46.416530    5068 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0507 19:59:46.450351    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0507 19:59:46.478626    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:59:46.660294    5068 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0507 19:59:46.755986    5068 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0507 19:59:46.764706    5068 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0507 19:59:46.773041    5068 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0507 19:59:46.773041    5068 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0507 19:59:46.773041    5068 command_runner.go:130] > Device: 0,22	Inode: 855         Links: 1
	I0507 19:59:46.773041    5068 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0507 19:59:46.773041    5068 command_runner.go:130] > Access: 2024-05-07 19:59:46.925605595 +0000
	I0507 19:59:46.773041    5068 command_runner.go:130] > Modify: 2024-05-07 19:59:46.925605595 +0000
	I0507 19:59:46.773155    5068 command_runner.go:130] > Change: 2024-05-07 19:59:46.928605770 +0000
	I0507 19:59:46.773155    5068 command_runner.go:130] >  Birth: -
	I0507 19:59:46.773155    5068 start.go:562] Will wait 60s for crictl version
	I0507 19:59:46.781392    5068 ssh_runner.go:195] Run: which crictl
	I0507 19:59:46.786653    5068 command_runner.go:130] > /usr/bin/crictl
	I0507 19:59:46.795076    5068 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0507 19:59:46.845680    5068 command_runner.go:130] > Version:  0.1.0
	I0507 19:59:46.845680    5068 command_runner.go:130] > RuntimeName:  docker
	I0507 19:59:46.845680    5068 command_runner.go:130] > RuntimeVersion:  26.0.2
	I0507 19:59:46.845754    5068 command_runner.go:130] > RuntimeApiVersion:  v1
	I0507 19:59:46.845754    5068 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0507 19:59:46.856588    5068 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 19:59:46.884703    5068 command_runner.go:130] > 26.0.2
	I0507 19:59:46.891391    5068 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0507 19:59:46.919850    5068 command_runner.go:130] > 26.0.2
	I0507 19:59:46.924870    5068 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0507 19:59:46.927292    5068 out.go:177]   - env NO_PROXY=172.19.135.22
	I0507 19:59:46.930367    5068 out.go:177]   - env NO_PROXY=172.19.135.22,172.19.128.95
	I0507 19:59:46.932578    5068 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0507 19:59:46.936254    5068 ip.go:186] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0507 19:59:46.936254    5068 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0507 19:59:46.936254    5068 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0507 19:59:46.936254    5068 ip.go:207] Found interface: {Index:7 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:a3:a5:4f Flags:up|broadcast|multicast|running}
	I0507 19:59:46.938222    5068 ip.go:210] interface addr: fe80::1edb:f5fd:c218:d8d2/64
	I0507 19:59:46.938761    5068 ip.go:210] interface addr: 172.19.128.1/20
	I0507 19:59:46.947906    5068 ssh_runner.go:195] Run: grep 172.19.128.1	host.minikube.internal$ /etc/hosts
	I0507 19:59:46.953820    5068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.19.128.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
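(Annotation, not part of the captured log.) The `/etc/hosts` update above uses a "strip any stale entry, append a fresh one" idiom so repeated runs stay idempotent. The same idiom, run against a scratch copy instead of the real `/etc/hosts` (the stale IP below is invented; the fresh IP and hostname are the ones from the log):

```shell
# Scratch hosts file with a deliberately stale host.minikube.internal entry.
hosts=$(mktemp)
printf '127.0.0.1\tlocalhost\n172.19.99.99\thost.minikube.internal\n' > "$hosts"
# Drop the old entry, append the current one, replace the file atomically-ish.
{ grep -v $'\thost.minikube.internal$' "$hosts"; printf '172.19.128.1\thost.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

After the rewrite exactly one `host.minikube.internal` line remains, carrying the gateway IP discovered from the `vEthernet (Default Switch)` interface.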
	I0507 19:59:46.973851    5068 mustload.go:65] Loading cluster: multinode-600000
	I0507 19:59:46.973923    5068 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:59:46.974618    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:59:48.876774    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:59:48.876828    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:48.876828    5068 host.go:66] Checking if "multinode-600000" exists ...
	I0507 19:59:48.877351    5068 certs.go:68] Setting up C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\multinode-600000 for IP: 172.19.142.217
	I0507 19:59:48.877351    5068 certs.go:194] generating shared ca certs ...
	I0507 19:59:48.877351    5068 certs.go:226] acquiring lock for ca certs: {Name:mkcb932913049efe02d6e38fc2201d3c46b3b4ae Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 19:59:48.877622    5068 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key
	I0507 19:59:48.877622    5068 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key
	I0507 19:59:48.878246    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0507 19:59:48.878307    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0507 19:59:48.878307    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0507 19:59:48.878960    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0507 19:59:48.878960    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem (1338 bytes)
	W0507 19:59:48.879543    5068 certs.go:480] ignoring C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992_empty.pem, impossibly tiny 0 bytes
	I0507 19:59:48.879543    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0507 19:59:48.880130    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\ca.pem (1082 bytes)
	I0507 19:59:48.880130    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0507 19:59:48.880716    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0507 19:59:48.881413    5068 certs.go:484] found cert: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem (1708 bytes)
	I0507 19:59:48.881485    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem -> /usr/share/ca-certificates/9992.pem
	I0507 19:59:48.881485    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem -> /usr/share/ca-certificates/99922.pem
	I0507 19:59:48.881485    5068 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:59:48.882186    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 19:59:48.927563    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0507 19:59:48.977755    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 19:59:49.024779    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0507 19:59:49.070685    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\certs\9992.pem --> /usr/share/ca-certificates/9992.pem (1338 bytes)
	I0507 19:59:49.114791    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\ssl\certs\99922.pem --> /usr/share/ca-certificates/99922.pem (1708 bytes)
	I0507 19:59:49.158027    5068 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 19:59:49.208797    5068 ssh_runner.go:195] Run: openssl version
	I0507 19:59:49.216410    5068 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0507 19:59:49.223801    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9992.pem && ln -fs /usr/share/ca-certificates/9992.pem /etc/ssl/certs/9992.pem"
	I0507 19:59:49.250314    5068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9992.pem
	I0507 19:59:49.256607    5068 command_runner.go:130] > -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 19:59:49.256682    5068 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 May  7 18:15 /usr/share/ca-certificates/9992.pem
	I0507 19:59:49.264477    5068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9992.pem
	I0507 19:59:49.275181    5068 command_runner.go:130] > 51391683
	I0507 19:59:49.283125    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9992.pem /etc/ssl/certs/51391683.0"
	I0507 19:59:49.308464    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/99922.pem && ln -fs /usr/share/ca-certificates/99922.pem /etc/ssl/certs/99922.pem"
	I0507 19:59:49.334429    5068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/99922.pem
	I0507 19:59:49.341204    5068 command_runner.go:130] > -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 19:59:49.341283    5068 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 May  7 18:15 /usr/share/ca-certificates/99922.pem
	I0507 19:59:49.348730    5068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/99922.pem
	I0507 19:59:49.357079    5068 command_runner.go:130] > 3ec20f2e
	I0507 19:59:49.364800    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/99922.pem /etc/ssl/certs/3ec20f2e.0"
	I0507 19:59:49.394081    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 19:59:49.419733    5068 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:59:49.427371    5068 command_runner.go:130] > -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:59:49.427371    5068 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 May  7 18:01 /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:59:49.435872    5068 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 19:59:49.443710    5068 command_runner.go:130] > b5213941
	I0507 19:59:49.452600    5068 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
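(Annotation, not part of the captured log.) The symlink names above (`51391683.0`, `3ec20f2e.0`, `b5213941.0`) come from `openssl x509 -hash`: OpenSSL locates trusted CAs in `/etc/ssl/certs` by subject-name hash, so each cert gets a `<hash>.0` link. A self-contained sketch with a throwaway cert (the CN is made up) shows the idiom:

```shell
# Generate a disposable self-signed cert, hash it, and create the
# <hash>.0 symlink the way the log's ln -fs commands do.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj '/CN=example-ca' -days 1 \
  -keyout "$dir/ca.key" -out "$dir/ca.pem" 2>/dev/null
hash=$(openssl x509 -hash -noout -in "$dir/ca.pem")
ln -fs "$dir/ca.pem" "$dir/$hash.0"
echo "$hash"
```

The hash is eight hex digits, which is why a lookup in `/etc/ssl/certs` needs only the link name, not the cert's filename.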
	I0507 19:59:49.477903    5068 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0507 19:59:49.484045    5068 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0507 19:59:49.484264    5068 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0507 19:59:49.484264    5068 kubeadm.go:928] updating node {m03 172.19.142.217 0 v1.30.0  false true} ...
	I0507 19:59:49.484264    5068 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-600000-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.19.142.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0507 19:59:49.493206    5068 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0507 19:59:49.511129    5068 command_runner.go:130] > kubeadm
	I0507 19:59:49.511219    5068 command_runner.go:130] > kubectl
	I0507 19:59:49.511219    5068 command_runner.go:130] > kubelet
	I0507 19:59:49.511327    5068 binaries.go:44] Found k8s binaries, skipping transfer
	I0507 19:59:49.520030    5068 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0507 19:59:49.535921    5068 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I0507 19:59:49.563820    5068 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 19:59:49.599564    5068 ssh_runner.go:195] Run: grep 172.19.135.22	control-plane.minikube.internal$ /etc/hosts
	I0507 19:59:49.606146    5068 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.19.135.22	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 19:59:49.633568    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 19:59:49.821270    5068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 19:59:49.849902    5068 host.go:66] Checking if "multinode-600000" exists ...
	I0507 19:59:49.850249    5068 start.go:316] joinCluster: &{Name:multinode-600000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:multinode-600000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.135.22 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.19.128.95 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.19.142.217 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableO
ptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 19:59:49.850249    5068 start.go:329] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:172.19.142.217 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0507 19:59:49.850793    5068 host.go:66] Checking if "multinode-600000-m03" exists ...
	I0507 19:59:49.851316    5068 mustload.go:65] Loading cluster: multinode-600000
	I0507 19:59:49.851728    5068 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:59:49.852168    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:59:51.745044    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:59:51.745044    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:51.745044    5068 host.go:66] Checking if "multinode-600000" exists ...
	I0507 19:59:51.745462    5068 api_server.go:166] Checking apiserver status ...
	I0507 19:59:51.757568    5068 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 19:59:51.757568    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:59:53.687690    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:59:53.687690    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:53.687892    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:59:55.986353    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 19:59:55.986353    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:55.986992    5068 sshutil.go:53] new ssh client: &{IP:172.19.135.22 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\id_rsa Username:docker}
	I0507 19:59:56.094819    5068 command_runner.go:130] > 1882
	I0507 19:59:56.095076    5068 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (4.3372096s)
	I0507 19:59:56.103491    5068 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1882/cgroup
	W0507 19:59:56.120852    5068 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1882/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0507 19:59:56.129205    5068 ssh_runner.go:195] Run: ls
	I0507 19:59:56.136363    5068 api_server.go:253] Checking apiserver healthz at https://172.19.135.22:8443/healthz ...
	I0507 19:59:56.142976    5068 api_server.go:279] https://172.19.135.22:8443/healthz returned 200:
	ok
	I0507 19:59:56.154784    5068 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl drain multinode-600000-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data
	I0507 19:59:56.287883    5068 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-dkxzt, kube-system/kube-proxy-pzn8q
	I0507 19:59:56.290143    5068 command_runner.go:130] > node/multinode-600000-m03 cordoned
	I0507 19:59:56.290143    5068 command_runner.go:130] > node/multinode-600000-m03 drained
	I0507 19:59:56.290506    5068 node.go:128] successfully drained node "multinode-600000-m03"
	I0507 19:59:56.290506    5068 ssh_runner.go:195] Run: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock"
	I0507 19:59:56.290605    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:59:58.184228    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:59:58.184752    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:59:58.184752    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 20:00:00.461825    5068 main.go:141] libmachine: [stdout =====>] : 172.19.142.217
	
	I0507 20:00:00.462717    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 20:00:00.463132    5068 sshutil.go:53] new ssh client: &{IP:172.19.142.217 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m03\id_rsa Username:docker}
	I0507 20:00:00.861672    5068 command_runner.go:130] ! W0507 20:00:01.101246    1486 removeetcdmember.go:106] [reset] No kubeadm config, using etcd pod spec to get data directory
	I0507 20:00:01.214963    5068 command_runner.go:130] > [preflight] Running pre-flight checks
	I0507 20:00:01.215143    5068 command_runner.go:130] > [reset] Deleted contents of the etcd data directory: /var/lib/etcd
	I0507 20:00:01.215143    5068 command_runner.go:130] > [reset] Stopping the kubelet service
	I0507 20:00:01.215143    5068 command_runner.go:130] > [reset] Unmounting mounted directories in "/var/lib/kubelet"
	I0507 20:00:01.215255    5068 command_runner.go:130] > [reset] Deleting contents of directories: [/etc/kubernetes/manifests /var/lib/kubelet /etc/kubernetes/pki]
	I0507 20:00:01.215255    5068 command_runner.go:130] > [reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/super-admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/bootstrap-kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf]
	I0507 20:00:01.215255    5068 command_runner.go:130] > The reset process does not clean CNI configuration. To do so, you must remove /etc/cni/net.d
	I0507 20:00:01.215255    5068 command_runner.go:130] > The reset process does not reset or clean up iptables rules or IPVS tables.
	I0507 20:00:01.215366    5068 command_runner.go:130] > If you wish to reset iptables, you must do so manually by using the "iptables" command.
	I0507 20:00:01.215366    5068 command_runner.go:130] > If your cluster was setup to utilize IPVS, run ipvsadm --clear (or similar)
	I0507 20:00:01.215366    5068 command_runner.go:130] > to reset your system's IPVS tables.
	I0507 20:00:01.215366    5068 command_runner.go:130] > The reset process does not clean your kubeconfig files and you must remove them manually.
	I0507 20:00:01.215366    5068 command_runner.go:130] > Please, check the contents of the $HOME/.kube/config file.
	I0507 20:00:01.215491    5068 ssh_runner.go:235] Completed: /bin/bash -c "KUBECONFIG=/var/lib/minikube/kubeconfig sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm reset --force --ignore-preflight-errors=all --cri-socket=unix:///var/run/cri-dockerd.sock": (4.9246466s)
	I0507 20:00:01.215491    5068 node.go:155] successfully reset node "multinode-600000-m03"
	I0507 20:00:01.217428    5068 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 20:00:01.218360    5068 kapi.go:59] client config for multinode-600000: &rest.Config{Host:"https://172.19.135.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 20:00:01.219434    5068 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0507 20:00:01.219540    5068 round_trippers.go:463] DELETE https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 20:00:01.219540    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:01.219623    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:01.219623    5068 round_trippers.go:473]     Content-Type: application/json
	I0507 20:00:01.219623    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:01.237641    5068 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0507 20:00:01.237641    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:01.237958    5068 round_trippers.go:580]     Audit-Id: dd4dc2a4-2efc-47f2-93c8-3333a17250f2
	I0507 20:00:01.237958    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:01.237958    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:01.237958    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:01.237958    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:01.237958    5068 round_trippers.go:580]     Content-Length: 171
	I0507 20:00:01.237958    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:01 GMT
	I0507 20:00:01.237958    5068 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-600000-m03","kind":"nodes","uid":"ec7533ad-814b-49fe-bc8d-a070f7fb171f"}}
	I0507 20:00:01.238050    5068 node.go:180] successfully deleted node "multinode-600000-m03"
	I0507 20:00:01.238050    5068 start.go:333] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:172.19.142.217 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0507 20:00:01.238050    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0507 20:00:01.238050    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 20:00:03.193661    5068 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 20:00:03.193661    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 20:00:03.193738    5068 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 20:00:05.469641    5068 main.go:141] libmachine: [stdout =====>] : 172.19.135.22
	
	I0507 20:00:05.469641    5068 main.go:141] libmachine: [stderr =====>] : 
	I0507 20:00:05.469767    5068 sshutil.go:53] new ssh client: &{IP:172.19.135.22 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\id_rsa Username:docker}
	I0507 20:00:05.660365    5068 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token xz9hf4.9rxqo3md21dm2ngc --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 
	I0507 20:00:05.660481    5068 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm token create --print-join-command --ttl=0": (4.4221267s)
	I0507 20:00:05.660481    5068 start.go:342] trying to join worker node "m03" to cluster: &{Name:m03 IP:172.19.142.217 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0507 20:00:05.660481    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xz9hf4.9rxqo3md21dm2ngc --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-600000-m03"
	I0507 20:00:05.857514    5068 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0507 20:00:07.206684    5068 command_runner.go:130] > [preflight] Running pre-flight checks
	I0507 20:00:07.207606    5068 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0507 20:00:07.207606    5068 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0507 20:00:07.207606    5068 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0507 20:00:07.207606    5068 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0507 20:00:07.207670    5068 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0507 20:00:07.207670    5068 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0507 20:00:07.207670    5068 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.001902959s
	I0507 20:00:07.207670    5068 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0507 20:00:07.207670    5068 command_runner.go:130] > This node has joined the cluster:
	I0507 20:00:07.207670    5068 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0507 20:00:07.207670    5068 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0507 20:00:07.207670    5068 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0507 20:00:07.207741    5068 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm join control-plane.minikube.internal:8443 --token xz9hf4.9rxqo3md21dm2ngc --discovery-token-ca-cert-hash sha256:931f752ca063cc161db9d00a66e1e235f9a673b9dc0e49228e9ec99d810de7b1 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-600000-m03": (1.5470486s)
	I0507 20:00:07.207812    5068 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0507 20:00:07.586979    5068 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0507 20:00:07.595453    5068 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-600000-m03 minikube.k8s.io/updated_at=2024_05_07T20_00_07_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f minikube.k8s.io/name=multinode-600000 minikube.k8s.io/primary=false
	I0507 20:00:07.713650    5068 command_runner.go:130] > node/multinode-600000-m03 labeled
	I0507 20:00:07.713650    5068 start.go:318] duration metric: took 17.8621738s to joinCluster
	I0507 20:00:07.713650    5068 start.go:234] Will wait 6m0s for node &{Name:m03 IP:172.19.142.217 Port:0 KubernetesVersion:v1.30.0 ContainerRuntime: ControlPlane:false Worker:true}
	I0507 20:00:07.717332    5068 out.go:177] * Verifying Kubernetes components...
	I0507 20:00:07.714657    5068 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 20:00:07.730249    5068 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0507 20:00:07.928034    5068 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0507 20:00:07.961400    5068 loader.go:395] Config loaded from file:  C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 20:00:07.962200    5068 kapi.go:59] client config for multinode-600000: &rest.Config{Host:"https://172.19.135.22:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\profiles\\multinode-600000\\client.key", CAFile:"C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2655b00), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0507 20:00:07.962628    5068 node_ready.go:35] waiting up to 6m0s for node "multinode-600000-m03" to be "Ready" ...
	I0507 20:00:07.963249    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 20:00:07.963249    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:07.963321    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:07.963321    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:08.009554    5068 round_trippers.go:574] Response Status: 200 OK in 46 milliseconds
	I0507 20:00:08.009554    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:08.009554    5068 round_trippers.go:580]     Audit-Id: ea75e033-d684-41b4-b912-4093553e23a2
	I0507 20:00:08.010109    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:08.010109    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:08.010109    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:08.010109    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:08.010109    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:08 GMT
	I0507 20:00:08.010922    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m03","uid":"aba77896-4856-4a21-886e-724f9e5e85e9","resourceVersion":"2205","creationTimestamp":"2024-05-07T20:00:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T20_00_07_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T20:00:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3397 chars]
	I0507 20:00:08.466483    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 20:00:08.466483    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:08.466483    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:08.466483    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:08.470763    5068 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0507 20:00:08.470763    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:08.470763    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:08.470763    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:08.470763    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:08.470763    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:08 GMT
	I0507 20:00:08.470763    5068 round_trippers.go:580]     Audit-Id: 4c46a208-6053-4271-a508-cb17b643556c
	I0507 20:00:08.470763    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:08.470763    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m03","uid":"aba77896-4856-4a21-886e-724f9e5e85e9","resourceVersion":"2205","creationTimestamp":"2024-05-07T20:00:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T20_00_07_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T20:00:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3397 chars]
	I0507 20:00:08.972837    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 20:00:08.972906    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:08.972906    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:08.972906    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:08.976211    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 20:00:08.976890    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:08.976890    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:08.976890    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:08.976890    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:09 GMT
	I0507 20:00:08.976890    5068 round_trippers.go:580]     Audit-Id: 0886b244-f929-4ff2-896d-1662f189d06a
	I0507 20:00:08.976890    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:08.976890    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:08.977073    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m03","uid":"aba77896-4856-4a21-886e-724f9e5e85e9","resourceVersion":"2205","creationTimestamp":"2024-05-07T20:00:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T20_00_07_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T20:00:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3397 chars]
	I0507 20:00:09.477997    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 20:00:09.477997    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:09.477997    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:09.477997    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:09.481322    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 20:00:09.482342    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:09.482342    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:09 GMT
	I0507 20:00:09.482342    5068 round_trippers.go:580]     Audit-Id: 00ec540c-e64d-48b5-8bbe-cde34685f394
	I0507 20:00:09.482342    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:09.482342    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:09.482342    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:09.482342    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:09.482727    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m03","uid":"aba77896-4856-4a21-886e-724f9e5e85e9","resourceVersion":"2205","creationTimestamp":"2024-05-07T20:00:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T20_00_07_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T20:00:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3397 chars]
	I0507 20:00:09.977128    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 20:00:09.977274    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:09.977274    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:09.977274    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:09.980598    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 20:00:09.981259    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:09.981259    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:09.981259    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:09.981259    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:10 GMT
	I0507 20:00:09.981259    5068 round_trippers.go:580]     Audit-Id: 0fac696b-4a8f-41fe-b4cc-37784497579d
	I0507 20:00:09.981259    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:09.981259    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:09.981950    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m03","uid":"aba77896-4856-4a21-886e-724f9e5e85e9","resourceVersion":"2205","creationTimestamp":"2024-05-07T20:00:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T20_00_07_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T20:00:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3397 chars]
	I0507 20:00:09.982312    5068 node_ready.go:53] node "multinode-600000-m03" has status "Ready":"False"
	I0507 20:00:10.465325    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 20:00:10.465325    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:10.465325    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:10.465325    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:10.469070    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 20:00:10.469070    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:10.469070    5068 round_trippers.go:580]     Audit-Id: 9f88d2ac-50aa-4652-a633-c0ae48e6f5fc
	I0507 20:00:10.469070    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:10.469070    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:10.469070    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:10.469070    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:10.469177    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:10 GMT
	I0507 20:00:10.469177    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m03","uid":"aba77896-4856-4a21-886e-724f9e5e85e9","resourceVersion":"2205","creationTimestamp":"2024-05-07T20:00:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T20_00_07_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T20:00:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3397 chars]
	I0507 20:00:10.963659    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 20:00:10.963659    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:10.963659    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:10.963659    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:10.967315    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 20:00:10.967771    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:10.967771    5068 round_trippers.go:580]     Audit-Id: e00191f3-eb79-4c04-ac40-7d08500bf715
	I0507 20:00:10.967771    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:10.967771    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:10.967771    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:10.967771    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:10.967771    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:11 GMT
	I0507 20:00:10.967969    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m03","uid":"aba77896-4856-4a21-886e-724f9e5e85e9","resourceVersion":"2205","creationTimestamp":"2024-05-07T20:00:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T20_00_07_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T20:00:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}}, [truncated 3397 chars]
	I0507 20:00:11.464738    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 20:00:11.464738    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:11.464738    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:11.464738    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:11.467963    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 20:00:11.468428    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:11.468522    5068 round_trippers.go:580]     Audit-Id: 8206a195-cbf2-4f6e-89f7-03c296e7dd43
	I0507 20:00:11.468522    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:11.468854    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:11.469074    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:11.469074    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:11.469074    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:11 GMT
	I0507 20:00:11.469604    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m03","uid":"aba77896-4856-4a21-886e-724f9e5e85e9","resourceVersion":"2227","creationTimestamp":"2024-05-07T20:00:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T20_00_07_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T20:00:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3506 chars]
	I0507 20:00:11.964045    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 20:00:11.964045    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:11.964045    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:11.964045    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:11.966782    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 20:00:11.967731    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:11.967731    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:11.967731    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:11.967731    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:11.967731    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:11.967731    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:12 GMT
	I0507 20:00:11.967731    5068 round_trippers.go:580]     Audit-Id: 90b60fd2-e35d-45ea-b017-952514ea9299
	I0507 20:00:11.967861    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m03","uid":"aba77896-4856-4a21-886e-724f9e5e85e9","resourceVersion":"2230","creationTimestamp":"2024-05-07T20:00:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T20_00_07_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T20:00:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3764 chars]
	I0507 20:00:11.968477    5068 node_ready.go:49] node "multinode-600000-m03" has status "Ready":"True"
	I0507 20:00:11.968477    5068 node_ready.go:38] duration metric: took 4.0055739s for node "multinode-600000-m03" to be "Ready" ...
	I0507 20:00:11.968603    5068 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 20:00:11.968722    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods
	I0507 20:00:11.968722    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:11.968722    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:11.968722    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:11.971931    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 20:00:11.972878    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:11.972915    5068 round_trippers.go:580]     Audit-Id: a734ae06-eba6-412f-83bd-b5bbd5597441
	I0507 20:00:11.972915    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:11.972915    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:11.972915    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:11.972915    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:11.972915    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:12 GMT
	I0507 20:00:11.975013    5068 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"2230"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1873","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 85662 chars]
	I0507 20:00:11.978568    5068 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace to be "Ready" ...
	I0507 20:00:11.978665    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-5j966
	I0507 20:00:11.978768    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:11.978768    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:11.978768    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:11.980921    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 20:00:11.980921    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:11.980921    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:11.980921    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:11.980921    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:11.980921    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:11.980921    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:12 GMT
	I0507 20:00:11.980921    5068 round_trippers.go:580]     Audit-Id: cec2643f-a7b9-479b-bb4f-37f626c8fb04
	I0507 20:00:11.981975    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-5j966","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"d067d438-f4af-42e8-930d-3423a3ac211f","resourceVersion":"1873","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba2a2457-6011-4e9c-ac0f-113b52f2e846","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba2a2457-6011-4e9c-ac0f-113b52f2e846\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6788 chars]
	I0507 20:00:11.982575    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 20:00:11.982575    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:11.982575    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:11.982575    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:11.985255    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 20:00:11.985255    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:11.985255    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:11.985255    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:12 GMT
	I0507 20:00:11.985255    5068 round_trippers.go:580]     Audit-Id: a1d3ad54-fa63-47b2-af81-e5ebe1ef588b
	I0507 20:00:11.985255    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:11.985615    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:11.985615    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:11.985774    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"2220","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 20:00:11.986442    5068 pod_ready.go:92] pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace has status "Ready":"True"
	I0507 20:00:11.986442    5068 pod_ready.go:81] duration metric: took 7.7761ms for pod "coredns-7db6d8ff4d-5j966" in "kube-system" namespace to be "Ready" ...
	I0507 20:00:11.986442    5068 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 20:00:11.986442    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-600000
	I0507 20:00:11.986442    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:11.986442    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:11.986442    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:11.988097    5068 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0507 20:00:11.988097    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:11.988097    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:11.988097    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:11.988097    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:12 GMT
	I0507 20:00:11.988097    5068 round_trippers.go:580]     Audit-Id: 03e3e704-05a6-4066-84e0-5e93dfd5e026
	I0507 20:00:11.988097    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:11.988097    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:11.989088    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-600000","namespace":"kube-system","uid":"de6e93ee-7fd0-45cd-82eb-44edd4a2c2e3","resourceVersion":"1798","creationTimestamp":"2024-05-07T19:54:33Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://172.19.135.22:2379","kubernetes.io/config.hash":"1581bf6b00d338797c8fb8b10b74abde","kubernetes.io/config.mirror":"1581bf6b00d338797c8fb8b10b74abde","kubernetes.io/config.seen":"2024-05-07T19:54:28.831640546Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:54:33Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6160 chars]
	I0507 20:00:11.989088    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 20:00:11.989754    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:11.989754    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:11.989799    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:11.995403    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 20:00:11.995403    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:11.995403    5068 round_trippers.go:580]     Audit-Id: b410c275-05b8-40e5-852c-062ab0a7e39c
	I0507 20:00:11.995403    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:11.995403    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:11.995403    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:11.995403    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:11.995403    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:12 GMT
	I0507 20:00:11.995403    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"2220","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 20:00:11.996050    5068 pod_ready.go:92] pod "etcd-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 20:00:11.996050    5068 pod_ready.go:81] duration metric: took 9.6072ms for pod "etcd-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 20:00:11.996050    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 20:00:11.996050    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-600000
	I0507 20:00:11.996050    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:11.996050    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:11.996050    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:11.999892    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 20:00:11.999892    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:11.999892    5068 round_trippers.go:580]     Audit-Id: c3a09b1d-0170-4db6-923a-1e8e8311bfb0
	I0507 20:00:11.999892    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:11.999892    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:11.999892    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:11.999892    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:11.999892    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:12 GMT
	I0507 20:00:12.000165    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-600000","namespace":"kube-system","uid":"4d9ace3f-e061-42ab-bb1d-3dac545f96a9","resourceVersion":"1795","creationTimestamp":"2024-05-07T19:54:35Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"172.19.135.22:8443","kubernetes.io/config.hash":"cd9cba8f94818776ec6d8836322192b3","kubernetes.io/config.mirror":"cd9cba8f94818776ec6d8836322192b3","kubernetes.io/config.seen":"2024-05-07T19:54:28.735132188Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:54:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7695 chars]
	I0507 20:00:12.000754    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 20:00:12.000754    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:12.000754    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:12.000754    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:12.002319    5068 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0507 20:00:12.002319    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:12.002319    5068 round_trippers.go:580]     Audit-Id: aa32172c-7254-44d9-92a7-e8f38303c2d3
	I0507 20:00:12.002319    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:12.002319    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:12.002319    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:12.002319    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:12.002319    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:12 GMT
	I0507 20:00:12.003309    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"2220","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 20:00:12.003309    5068 pod_ready.go:92] pod "kube-apiserver-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 20:00:12.003309    5068 pod_ready.go:81] duration metric: took 7.2584ms for pod "kube-apiserver-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 20:00:12.003309    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 20:00:12.003309    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-600000
	I0507 20:00:12.003309    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:12.003309    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:12.003309    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:12.006697    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 20:00:12.007086    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:12.007086    5068 round_trippers.go:580]     Audit-Id: d9c68fe4-0132-4ac6-8d03-2229bc279539
	I0507 20:00:12.007086    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:12.007086    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:12.007086    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:12.007086    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:12.007086    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:12 GMT
	I0507 20:00:12.007086    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-600000","namespace":"kube-system","uid":"b960b526-da40-480d-9a72-9ab8c7f2989a","resourceVersion":"1797","creationTimestamp":"2024-05-07T19:33:43Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"f5d6aa60dc93b5e562f37ed2236c3022","kubernetes.io/config.mirror":"f5d6aa60dc93b5e562f37ed2236c3022","kubernetes.io/config.seen":"2024-05-07T19:33:37.010155750Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:43Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7470 chars]
	I0507 20:00:12.007664    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 20:00:12.007664    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:12.007664    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:12.007664    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:12.011364    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 20:00:12.011439    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:12.011439    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:12 GMT
	I0507 20:00:12.011439    5068 round_trippers.go:580]     Audit-Id: 964a03b3-d1b4-41d5-8d53-8d24033fab87
	I0507 20:00:12.011439    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:12.011439    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:12.011540    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:12.011540    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:12.012159    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"2220","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 20:00:12.012935    5068 pod_ready.go:92] pod "kube-controller-manager-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 20:00:12.012935    5068 pod_ready.go:81] duration metric: took 9.6253ms for pod "kube-controller-manager-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 20:00:12.012935    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9fb6t" in "kube-system" namespace to be "Ready" ...
	I0507 20:00:12.165810    5068 request.go:629] Waited for 152.8643ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9fb6t
	I0507 20:00:12.165810    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9fb6t
	I0507 20:00:12.165810    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:12.165810    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:12.165810    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:12.171786    5068 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0507 20:00:12.171786    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:12.171786    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:12.171786    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:12.171786    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:12.171786    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:12 GMT
	I0507 20:00:12.171786    5068 round_trippers.go:580]     Audit-Id: faa25b25-4ece-4f0f-a76d-2249f350e5d5
	I0507 20:00:12.171786    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:12.172434    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-9fb6t","generateName":"kube-proxy-","namespace":"kube-system","uid":"f91cc93c-cb87-4494-9e11-b3bf74b9311d","resourceVersion":"2040","creationTimestamp":"2024-05-07T19:36:39Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"952e0024-0710-460c-920c-3959ceadbd10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:36:39Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"952e0024-0710-460c-920c-3959ceadbd10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5837 chars]
	I0507 20:00:12.366985    5068 request.go:629] Waited for 193.8153ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 20:00:12.367178    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m02
	I0507 20:00:12.367178    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:12.367178    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:12.367178    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:12.370525    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 20:00:12.370525    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:12.370525    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:12.370525    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:12.370525    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:12 GMT
	I0507 20:00:12.370631    5068 round_trippers.go:580]     Audit-Id: 769642ab-b031-49e8-ba24-bd0fd0133d25
	I0507 20:00:12.370631    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:12.370631    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:12.370774    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m02","uid":"ecb65c2c-9ac5-44bc-9509-f0c59100949c","resourceVersion":"2061","creationTimestamp":"2024-05-07T19:57:56Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T19_57_56_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:57:56Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3812 chars]
	I0507 20:00:12.370927    5068 pod_ready.go:92] pod "kube-proxy-9fb6t" in "kube-system" namespace has status "Ready":"True"
	I0507 20:00:12.370927    5068 pod_ready.go:81] duration metric: took 357.9677ms for pod "kube-proxy-9fb6t" in "kube-system" namespace to be "Ready" ...
	I0507 20:00:12.370927    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-c9gw5" in "kube-system" namespace to be "Ready" ...
	I0507 20:00:12.569189    5068 request.go:629] Waited for 198.1432ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9gw5
	I0507 20:00:12.569454    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-c9gw5
	I0507 20:00:12.569454    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:12.569454    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:12.569454    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:12.572044    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 20:00:12.573070    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:12.573070    5068 round_trippers.go:580]     Audit-Id: 7a8d39a7-b4a6-4e70-af67-238f1b890861
	I0507 20:00:12.573070    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:12.573070    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:12.573070    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:12.573070    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:12.573070    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:12 GMT
	I0507 20:00:12.573216    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-c9gw5","generateName":"kube-proxy-","namespace":"kube-system","uid":"9a39807c-6243-4aa2-86f4-8626031c80a6","resourceVersion":"1759","creationTimestamp":"2024-05-07T19:33:58Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"952e0024-0710-460c-920c-3959ceadbd10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:58Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"952e0024-0710-460c-920c-3959ceadbd10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6034 chars]
	I0507 20:00:12.771979    5068 request.go:629] Waited for 198.0292ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 20:00:12.771979    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 20:00:12.771979    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:12.771979    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:12.771979    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:12.775551    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 20:00:12.775551    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:12.775721    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:12.775721    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:12.775721    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:12.775721    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:13 GMT
	I0507 20:00:12.775721    5068 round_trippers.go:580]     Audit-Id: c8b517ab-30a9-4a98-b973-ffd7c105909d
	I0507 20:00:12.775721    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:12.775721    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"2220","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 20:00:12.776566    5068 pod_ready.go:92] pod "kube-proxy-c9gw5" in "kube-system" namespace has status "Ready":"True"
	I0507 20:00:12.776653    5068 pod_ready.go:81] duration metric: took 405.698ms for pod "kube-proxy-c9gw5" in "kube-system" namespace to be "Ready" ...
	I0507 20:00:12.776653    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-pzn8q" in "kube-system" namespace to be "Ready" ...
	I0507 20:00:12.972276    5068 request.go:629] Waited for 195.5409ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pzn8q
	I0507 20:00:12.972276    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-proxy-pzn8q
	I0507 20:00:12.972539    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:12.972539    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:12.972539    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:12.975096    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 20:00:12.975096    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:12.975096    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:13 GMT
	I0507 20:00:12.975096    5068 round_trippers.go:580]     Audit-Id: 1edac28e-d525-4ef3-9778-c1309e711126
	I0507 20:00:12.975096    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:12.975096    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:12.975096    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:12.975096    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:12.976310    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-pzn8q","generateName":"kube-proxy-","namespace":"kube-system","uid":"f2506861-1f09-4193-b751-22a685a0b71b","resourceVersion":"2215","creationTimestamp":"2024-05-07T19:40:53Z","labels":{"controller-revision-hash":"79cf874c65","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"952e0024-0710-460c-920c-3959ceadbd10","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:40:53Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"952e0024-0710-460c-920c-3959ceadbd10\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5842 chars]
	I0507 20:00:13.175091    5068 request.go:629] Waited for 198.1896ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 20:00:13.175414    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000-m03
	I0507 20:00:13.175414    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:13.175414    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:13.175414    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:13.177793    5068 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0507 20:00:13.178713    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:13.178713    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:13 GMT
	I0507 20:00:13.178713    5068 round_trippers.go:580]     Audit-Id: 1ba5dfa7-0a08-4c44-9093-b2e117fb23f1
	I0507 20:00:13.178713    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:13.178713    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:13.178713    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:13.178713    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:13.179297    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000-m03","uid":"aba77896-4856-4a21-886e-724f9e5e85e9","resourceVersion":"2230","creationTimestamp":"2024-05-07T20:00:07Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000-m03","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_05_07T20_00_07_0700","minikube.k8s.io/version":"v1.33.0"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-05-07T20:00:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3764 chars]
	I0507 20:00:13.179750    5068 pod_ready.go:92] pod "kube-proxy-pzn8q" in "kube-system" namespace has status "Ready":"True"
	I0507 20:00:13.179750    5068 pod_ready.go:81] duration metric: took 403.0691ms for pod "kube-proxy-pzn8q" in "kube-system" namespace to be "Ready" ...
	I0507 20:00:13.179750    5068 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 20:00:13.379509    5068 request.go:629] Waited for 199.4515ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600000
	I0507 20:00:13.379592    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-600000
	I0507 20:00:13.379677    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:13.379677    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:13.379727    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:13.383036    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 20:00:13.383385    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:13.383385    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:13 GMT
	I0507 20:00:13.383385    5068 round_trippers.go:580]     Audit-Id: 13c13791-65e5-449a-8dcb-5974a73f0853
	I0507 20:00:13.383385    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:13.383385    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:13.383385    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:13.383385    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:13.383889    5068 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-600000","namespace":"kube-system","uid":"ec3ac949-cb83-49be-a908-c93e23135ae8","resourceVersion":"1777","creationTimestamp":"2024-05-07T19:33:44Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"7c4ee79f6d4f6adb00b636f817445fef","kubernetes.io/config.mirror":"7c4ee79f6d4f6adb00b636f817445fef","kubernetes.io/config.seen":"2024-05-07T19:33:44.165677427Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-05-07T19:33:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5200 chars]
	I0507 20:00:13.567275    5068 request.go:629] Waited for 182.351ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 20:00:13.567424    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes/multinode-600000
	I0507 20:00:13.567424    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:13.567486    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:13.567486    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:13.571152    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 20:00:13.571152    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:13.571152    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:13.571152    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:13 GMT
	I0507 20:00:13.571152    5068 round_trippers.go:580]     Audit-Id: 31909aea-74d1-410c-b11e-d43ba1525e30
	I0507 20:00:13.571152    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:13.571152    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:13.571152    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:13.572006    5068 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"2220","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-05-07T19:33:41Z","fieldsType":"FieldsV1","f [truncated 5238 chars]
	I0507 20:00:13.572785    5068 pod_ready.go:92] pod "kube-scheduler-multinode-600000" in "kube-system" namespace has status "Ready":"True"
	I0507 20:00:13.572846    5068 pod_ready.go:81] duration metric: took 393.0687ms for pod "kube-scheduler-multinode-600000" in "kube-system" namespace to be "Ready" ...
	I0507 20:00:13.572846    5068 pod_ready.go:38] duration metric: took 1.604132s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 20:00:13.572945    5068 system_svc.go:44] waiting for kubelet service to be running ....
	I0507 20:00:13.585427    5068 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 20:00:13.610086    5068 system_svc.go:56] duration metric: took 36.5661ms WaitForService to wait for kubelet
	I0507 20:00:13.610121    5068 kubeadm.go:576] duration metric: took 5.8960648s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 20:00:13.610121    5068 node_conditions.go:102] verifying NodePressure condition ...
	I0507 20:00:13.769571    5068 request.go:629] Waited for 159.3467ms due to client-side throttling, not priority and fairness, request: GET:https://172.19.135.22:8443/api/v1/nodes
	I0507 20:00:13.769571    5068 round_trippers.go:463] GET https://172.19.135.22:8443/api/v1/nodes
	I0507 20:00:13.769571    5068 round_trippers.go:469] Request Headers:
	I0507 20:00:13.769571    5068 round_trippers.go:473]     Accept: application/json, */*
	I0507 20:00:13.769571    5068 round_trippers.go:473]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0507 20:00:13.773432    5068 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0507 20:00:13.773432    5068 round_trippers.go:577] Response Headers:
	I0507 20:00:13.773432    5068 round_trippers.go:580]     Audit-Id: 0e0222ce-f1b4-4f95-9aae-235cab28de1f
	I0507 20:00:13.773515    5068 round_trippers.go:580]     Cache-Control: no-cache, private
	I0507 20:00:13.773515    5068 round_trippers.go:580]     Content-Type: application/json
	I0507 20:00:13.773515    5068 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 203f9372-f3cb-4ec1-9c6c-3b05b3f09162
	I0507 20:00:13.773515    5068 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 829a8bb6-a0be-4b21-9f1a-b8396fd2684d
	I0507 20:00:13.773515    5068 round_trippers.go:580]     Date: Tue, 07 May 2024 20:00:14 GMT
	I0507 20:00:13.773877    5068 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"2231"},"items":[{"metadata":{"name":"multinode-600000","uid":"5945dd51-a68d-4b57-948c-b6106950d500","resourceVersion":"2220","creationTimestamp":"2024-05-07T19:33:41Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-600000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"a2bee053733709aad5480b65159f65519e411d9f","minikube.k8s.io/name":"multinode-600000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_05_07T19_33_45_0700","minikube.k8s.io/version":"v1.33.0","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 14852 chars]
	I0507 20:00:13.774861    5068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 20:00:13.774861    5068 node_conditions.go:123] node cpu capacity is 2
	I0507 20:00:13.774861    5068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 20:00:13.774939    5068 node_conditions.go:123] node cpu capacity is 2
	I0507 20:00:13.774939    5068 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0507 20:00:13.774939    5068 node_conditions.go:123] node cpu capacity is 2
	I0507 20:00:13.774939    5068 node_conditions.go:105] duration metric: took 164.8066ms to run NodePressure ...
	I0507 20:00:13.774939    5068 start.go:240] waiting for startup goroutines ...
	I0507 20:00:13.775002    5068 start.go:254] writing updated cluster config ...
	I0507 20:00:13.787074    5068 ssh_runner.go:195] Run: rm -f paused
	I0507 20:00:13.905753    5068 start.go:600] kubectl: 1.30.0, cluster: 1.30.0 (minor skew: 0)
	I0507 20:00:13.909457    5068 out.go:177] * Done! kubectl is now configured to use "multinode-600000" cluster and "default" namespace by default
	
	
	==> Docker <==
	May 07 19:55:41 multinode-600000 dockerd[1047]: 2024/05/07 19:55:41 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:44 multinode-600000 dockerd[1047]: 2024/05/07 19:55:44 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:45 multinode-600000 dockerd[1047]: 2024/05/07 19:55:45 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:45 multinode-600000 dockerd[1047]: 2024/05/07 19:55:45 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	May 07 19:55:48 multinode-600000 dockerd[1047]: 2024/05/07 19:55:48 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	78ecb8cdfd06c       8c811b4aec35f                                                                                         4 minutes ago       Running             busybox                   1                   f8dc35309168f       busybox-fc5497c4f-gcqlv
	d27627c198085       cbb01a7bd410d                                                                                         4 minutes ago       Running             coredns                   1                   56c438bec1777       coredns-7db6d8ff4d-5j966
	4c93a69b2eee4       6e38f40d628db                                                                                         5 minutes ago       Running             storage-provisioner       2                   09d2fda974adf       storage-provisioner
	29b5cae0b8f14       4950bb10b3f87                                                                                         6 minutes ago       Running             kindnet-cni               1                   857f6b5630910       kindnet-zw4r9
	5255a972ff6ce       a0bf559e280cf                                                                                         6 minutes ago       Running             kube-proxy                1                   deb171c003562       kube-proxy-c9gw5
	d1e3e4629bc4a       6e38f40d628db                                                                                         6 minutes ago       Exited              storage-provisioner       1                   09d2fda974adf       storage-provisioner
	7c95e3addc4b8       c42f13656d0b2                                                                                         6 minutes ago       Running             kube-apiserver            0                   fec63580ff266       kube-apiserver-multinode-600000
	ac320a872e77c       3861cfcd7c04c                                                                                         6 minutes ago       Running             etcd                      0                   c666fba0d0753       etcd-multinode-600000
	922d1e2b87454       c7aad43836fa5                                                                                         6 minutes ago       Running             kube-controller-manager   1                   5c37290307d14       kube-controller-manager-multinode-600000
	45341720d5be3       259c8277fcbbc                                                                                         6 minutes ago       Running             kube-scheduler            1                   89c8a2313bcaf       kube-scheduler-multinode-600000
	66301c2be7060       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago      Exited              busybox                   0                   4afb10dc8b115       busybox-fc5497c4f-gcqlv
	9550b237d8d7b       cbb01a7bd410d                                                                                         26 minutes ago      Exited              coredns                   0                   99af61c6e282a       coredns-7db6d8ff4d-5j966
	2d49ad078ed35       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              26 minutes ago      Exited              kindnet-cni               0                   58ebd877d77fb       kindnet-zw4r9
	aa9692c1fbd3b       a0bf559e280cf                                                                                         26 minutes ago      Exited              kube-proxy                0                   70cff02905e8f       kube-proxy-c9gw5
	7cefdac2050fa       259c8277fcbbc                                                                                         26 minutes ago      Exited              kube-scheduler            0                   75f27faec2ed6       kube-scheduler-multinode-600000
	3067f16e2e380       c7aad43836fa5                                                                                         26 minutes ago      Exited              kube-controller-manager   0                   af16a92d7c1cc       kube-controller-manager-multinode-600000
	
	
	==> coredns [9550b237d8d7] <==
	[INFO] 10.244.0.3:47331 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000061304s
	[INFO] 10.244.0.3:36195 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000211814s
	[INFO] 10.244.0.3:37240 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.00014531s
	[INFO] 10.244.0.3:56992 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.00014411s
	[INFO] 10.244.0.3:53922 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000127508s
	[INFO] 10.244.0.3:51034 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000225815s
	[INFO] 10.244.0.3:45123 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000130808s
	[INFO] 10.244.1.2:53185 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190512s
	[INFO] 10.244.1.2:47331 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000056804s
	[INFO] 10.244.1.2:42551 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000058104s
	[INFO] 10.244.1.2:47860 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000057104s
	[INFO] 10.244.0.3:53037 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190312s
	[INFO] 10.244.0.3:60613 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143109s
	[INFO] 10.244.0.3:33867 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000069105s
	[INFO] 10.244.0.3:40289 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00014191s
	[INFO] 10.244.1.2:55673 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000204514s
	[INFO] 10.244.1.2:46474 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000132609s
	[INFO] 10.244.1.2:48070 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000170211s
	[INFO] 10.244.1.2:56147 - 5 "PTR IN 1.128.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000093806s
	[INFO] 10.244.0.3:39426 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000107507s
	[INFO] 10.244.0.3:42569 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000295619s
	[INFO] 10.244.0.3:56970 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000267917s
	[INFO] 10.244.0.3:55625 - 5 "PTR IN 1.128.19.172.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.00014751s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [d27627c19808] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = a3820eb745a9a768a035bf81145ae0754aeb40457ffd5109db8c64dac842ada6c2edf6f9e6a410714e0f5cbc9cd90cb925a2fb37599adf58a40dc1bc5fa339b9
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50649 - 62527 "HINFO IN 8322179340745765625.4555534598598098973. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.052335947s
	
	
	==> describe nodes <==
	Name:               multinode-600000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-600000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	                    minikube.k8s.io/name=multinode-600000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_05_07T19_33_45_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 May 2024 19:33:41 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-600000
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 May 2024 20:00:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 May 2024 20:00:09 +0000   Tue, 07 May 2024 19:33:40 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 May 2024 20:00:09 +0000   Tue, 07 May 2024 19:33:40 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 May 2024 20:00:09 +0000   Tue, 07 May 2024 19:33:40 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 May 2024 20:00:09 +0000   Tue, 07 May 2024 19:55:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.135.22
	  Hostname:    multinode-600000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 fa6f1530e0ab4546b96ea753f13add46
	  System UUID:                f3433f71-57fc-a747-9f8d-4f98c0c4b458
	  Boot ID:                    93b81312-340b-4997-83aa-cdf61cfe3dbf
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-gcqlv                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-7db6d8ff4d-5j966                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     26m
	  kube-system                 etcd-multinode-600000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m4s
	  kube-system                 kindnet-zw4r9                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      26m
	  kube-system                 kube-apiserver-multinode-600000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m2s
	  kube-system                 kube-controller-manager-multinode-600000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-proxy-c9gw5                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 kube-scheduler-multinode-600000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 26m                  kube-proxy       
	  Normal  Starting                 6m1s                 kube-proxy       
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)    kubelet          Node multinode-600000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)    kubelet          Node multinode-600000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)    kubelet          Node multinode-600000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    26m                  kubelet          Node multinode-600000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  26m                  kubelet          Node multinode-600000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     26m                  kubelet          Node multinode-600000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  26m                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 26m                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           26m                  node-controller  Node multinode-600000 event: Registered Node multinode-600000 in Controller
	  Normal  NodeReady                26m                  kubelet          Node multinode-600000 status is now: NodeReady
	  Normal  Starting                 6m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     6m9s (x7 over 6m9s)  kubelet          Node multinode-600000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m8s (x8 over 6m9s)  kubelet          Node multinode-600000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m8s (x8 over 6m9s)  kubelet          Node multinode-600000 status is now: NodeHasNoDiskPressure
	  Normal  RegisteredNode           5m51s                node-controller  Node multinode-600000 event: Registered Node multinode-600000 in Controller
	
	
	Name:               multinode-600000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-600000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	                    minikube.k8s.io/name=multinode-600000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_07T19_57_56_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 May 2024 19:57:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-600000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 May 2024 20:00:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 May 2024 19:58:03 +0000   Tue, 07 May 2024 19:57:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 May 2024 19:58:03 +0000   Tue, 07 May 2024 19:57:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 May 2024 19:58:03 +0000   Tue, 07 May 2024 19:57:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 May 2024 19:58:03 +0000   Tue, 07 May 2024 19:58:03 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.128.95
	  Hostname:    multinode-600000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 6bc58fef8f8c49ae981de588e6e1d976
	  System UUID:                7ed694c3-4cb4-954c-b244-d0ff36461420
	  Boot ID:                    5a2abe84-adae-43d8-9bef-b0fa4e1e21c5
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-w78sl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m55s
	  kube-system                 kindnet-jmlw2              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-proxy-9fb6t           0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m38s                  kube-proxy       
	  Normal  Starting                 23m                    kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x2 over 23m)      kubelet          Node multinode-600000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x2 over 23m)      kubelet          Node multinode-600000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x2 over 23m)      kubelet          Node multinode-600000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                23m                    kubelet          Node multinode-600000-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  2m41s (x2 over 2m41s)  kubelet          Node multinode-600000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m41s (x2 over 2m41s)  kubelet          Node multinode-600000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m41s (x2 over 2m41s)  kubelet          Node multinode-600000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m41s                  node-controller  Node multinode-600000-m02 event: Registered Node multinode-600000-m02 in Controller
	  Normal  NodeReady                2m34s                  kubelet          Node multinode-600000-m02 status is now: NodeReady
	
	
	Name:               multinode-600000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-600000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=a2bee053733709aad5480b65159f65519e411d9f
	                    minikube.k8s.io/name=multinode-600000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_05_07T20_00_07_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Tue, 07 May 2024 20:00:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-600000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Tue, 07 May 2024 20:00:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Tue, 07 May 2024 20:00:12 +0000   Tue, 07 May 2024 20:00:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Tue, 07 May 2024 20:00:12 +0000   Tue, 07 May 2024 20:00:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Tue, 07 May 2024 20:00:12 +0000   Tue, 07 May 2024 20:00:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Tue, 07 May 2024 20:00:12 +0000   Tue, 07 May 2024 20:00:12 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.19.142.217
	  Hostname:    multinode-600000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 277f5f975b624401be66f6b065ec882d
	  System UUID:                ed9d4a55-0088-004e-addb-543af9e02720
	  Boot ID:                    12c5b6f1-bf42-42c2-8c9b-03d3b2e5d4b3
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-dkxzt       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-pzn8q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 10m                kube-proxy       
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 27s                kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)  kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)  kubelet          Node multinode-600000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)  kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                19m                kubelet          Node multinode-600000-m03 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    10m (x2 over 10m)  kubelet          Node multinode-600000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x2 over 10m)  kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m (x2 over 10m)  kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                10m                kubelet          Node multinode-600000-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  30s (x2 over 30s)  kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x2 over 30s)  kubelet          Node multinode-600000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x2 over 30s)  kubelet          Node multinode-600000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  30s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           26s                node-controller  Node multinode-600000-m03 event: Registered Node multinode-600000-m03 in Controller
	  Normal  NodeReady                25s                kubelet          Node multinode-600000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[May 7 19:53] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.293154] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.138766] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.459478] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +43.605395] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.173535] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[May 7 19:54] systemd-fstab-generator[975]: Ignoring "noauto" option for root device
	[  +0.087049] kauditd_printk_skb: 73 callbacks suppressed
	[  +0.469142] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +0.182768] systemd-fstab-generator[1025]: Ignoring "noauto" option for root device
	[  +0.198440] systemd-fstab-generator[1039]: Ignoring "noauto" option for root device
	[  +2.865339] systemd-fstab-generator[1227]: Ignoring "noauto" option for root device
	[  +0.189423] systemd-fstab-generator[1239]: Ignoring "noauto" option for root device
	[  +0.164316] systemd-fstab-generator[1251]: Ignoring "noauto" option for root device
	[  +0.220106] systemd-fstab-generator[1266]: Ignoring "noauto" option for root device
	[  +0.801286] systemd-fstab-generator[1378]: Ignoring "noauto" option for root device
	[  +0.081896] kauditd_printk_skb: 205 callbacks suppressed
	[  +3.512673] systemd-fstab-generator[1519]: Ignoring "noauto" option for root device
	[  +1.511112] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.012853] kauditd_printk_skb: 25 callbacks suppressed
	[  +3.386216] systemd-fstab-generator[2338]: Ignoring "noauto" option for root device
	[  +7.924740] kauditd_printk_skb: 55 callbacks suppressed
	[May 7 20:00] hrtimer: interrupt took 919355 ns
	
	
	==> etcd [ac320a872e77] <==
	{"level":"info","ts":"2024-05-07T19:54:30.71284Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-07T19:54:30.712991Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-05-07T19:54:30.713531Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 switched to configuration voters=(12305500322378496529)"}
	{"level":"info","ts":"2024-05-07T19:54:30.713649Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"9263975694bef132","local-member-id":"aac5eb588ad33a11","added-peer-id":"aac5eb588ad33a11","added-peer-peer-urls":["https://172.19.143.74:2380"]}
	{"level":"info","ts":"2024-05-07T19:54:30.714311Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"9263975694bef132","local-member-id":"aac5eb588ad33a11","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-07T19:54:30.714406Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-05-07T19:54:30.727875Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-05-07T19:54:30.733606Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"172.19.135.22:2380"}
	{"level":"info","ts":"2024-05-07T19:54:30.733844Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"172.19.135.22:2380"}
	{"level":"info","ts":"2024-05-07T19:54:30.734234Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"aac5eb588ad33a11","initial-advertise-peer-urls":["https://172.19.135.22:2380"],"listen-peer-urls":["https://172.19.135.22:2380"],"advertise-client-urls":["https://172.19.135.22:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.19.135.22:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-05-07T19:54:30.735199Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-05-07T19:54:32.251434Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 is starting a new election at term 2"}
	{"level":"info","ts":"2024-05-07T19:54:32.251481Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-05-07T19:54:32.251511Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 received MsgPreVoteResp from aac5eb588ad33a11 at term 2"}
	{"level":"info","ts":"2024-05-07T19:54:32.251525Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became candidate at term 3"}
	{"level":"info","ts":"2024-05-07T19:54:32.251534Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 received MsgVoteResp from aac5eb588ad33a11 at term 3"}
	{"level":"info","ts":"2024-05-07T19:54:32.251556Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aac5eb588ad33a11 became leader at term 3"}
	{"level":"info","ts":"2024-05-07T19:54:32.251563Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aac5eb588ad33a11 elected leader aac5eb588ad33a11 at term 3"}
	{"level":"info","ts":"2024-05-07T19:54:32.258987Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"aac5eb588ad33a11","local-member-attributes":"{Name:multinode-600000 ClientURLs:[https://172.19.135.22:2379]}","request-path":"/0/members/aac5eb588ad33a11/attributes","cluster-id":"9263975694bef132","publish-timeout":"7s"}
	{"level":"info","ts":"2024-05-07T19:54:32.259161Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-07T19:54:32.259624Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-05-07T19:54:32.259711Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-05-07T19:54:32.259193Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-05-07T19:54:32.263273Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.19.135.22:2379"}
	{"level":"info","ts":"2024-05-07T19:54:32.265301Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 20:00:37 up 7 min,  0 users,  load average: 0.41, 0.26, 0.12
	Linux multinode-600000 5.10.207 #1 SMP Tue Apr 30 22:38:43 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [29b5cae0b8f1] <==
	I0507 19:59:56.382298       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:59:56.382323       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 20:00:06.396305       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 20:00:06.396338       1 main.go:227] handling current node
	I0507 20:00:06.396349       1 main.go:223] Handling node with IPs: map[172.19.128.95:{}]
	I0507 20:00:06.396355       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 20:00:16.404121       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 20:00:16.404227       1 main.go:227] handling current node
	I0507 20:00:16.404240       1 main.go:223] Handling node with IPs: map[172.19.128.95:{}]
	I0507 20:00:16.404250       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 20:00:16.404518       1 main.go:223] Handling node with IPs: map[172.19.142.217:{}]
	I0507 20:00:16.404599       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 20:00:16.404769       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 172.19.142.217 Flags: [] Table: 0} 
	I0507 20:00:26.413534       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 20:00:26.413636       1 main.go:227] handling current node
	I0507 20:00:26.413649       1 main.go:223] Handling node with IPs: map[172.19.128.95:{}]
	I0507 20:00:26.413657       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 20:00:26.414549       1 main.go:223] Handling node with IPs: map[172.19.142.217:{}]
	I0507 20:00:26.414647       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	I0507 20:00:36.428930       1 main.go:223] Handling node with IPs: map[172.19.135.22:{}]
	I0507 20:00:36.429040       1 main.go:227] handling current node
	I0507 20:00:36.429067       1 main.go:223] Handling node with IPs: map[172.19.128.95:{}]
	I0507 20:00:36.429079       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 20:00:36.429390       1 main.go:223] Handling node with IPs: map[172.19.142.217:{}]
	I0507 20:00:36.429416       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kindnet [2d49ad078ed3] <==
	I0507 19:51:27.852540       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:51:37.859761       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:51:37.859857       1 main.go:227] handling current node
	I0507 19:51:37.859871       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:51:37.859930       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:51:37.860319       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:51:37.860413       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:51:47.872402       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:51:47.872506       1 main.go:227] handling current node
	I0507 19:51:47.872520       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:51:47.872528       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:51:47.872641       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:51:47.872692       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:51:57.885508       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:51:57.885541       1 main.go:227] handling current node
	I0507 19:51:57.885551       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:51:57.885556       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:51:57.885664       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:51:57.885730       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	I0507 19:52:07.898773       1 main.go:223] Handling node with IPs: map[172.19.143.74:{}]
	I0507 19:52:07.899054       1 main.go:227] handling current node
	I0507 19:52:07.899142       1 main.go:223] Handling node with IPs: map[172.19.143.144:{}]
	I0507 19:52:07.899258       1 main.go:250] Node multinode-600000-m02 has CIDR [10.244.1.0/24] 
	I0507 19:52:07.899556       1 main.go:223] Handling node with IPs: map[172.19.129.4:{}]
	I0507 19:52:07.899651       1 main.go:250] Node multinode-600000-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [7c95e3addc4b] <==
	I0507 19:54:33.700222       1 shared_informer.go:320] Caches are synced for configmaps
	I0507 19:54:33.702782       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0507 19:54:33.702797       1 policy_source.go:224] refreshing policies
	I0507 19:54:33.720688       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0507 19:54:33.721334       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0507 19:54:33.739066       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0507 19:54:33.741686       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0507 19:54:33.742272       1 aggregator.go:165] initial CRD sync complete...
	I0507 19:54:33.742439       1 autoregister_controller.go:141] Starting autoregister controller
	I0507 19:54:33.742581       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0507 19:54:33.742709       1 cache.go:39] Caches are synced for autoregister controller
	I0507 19:54:33.796399       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0507 19:54:33.800122       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0507 19:54:33.800332       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0507 19:54:33.800503       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0507 19:54:33.825705       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0507 19:54:34.607945       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0507 19:54:35.478370       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.19.135.22]
	I0507 19:54:35.480604       1 controller.go:615] quota admission added evaluator for: endpoints
	I0507 19:54:35.493313       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0507 19:54:36.265995       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0507 19:54:36.444774       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0507 19:54:36.460585       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0507 19:54:36.562263       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0507 19:54:36.572917       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [3067f16e2e38] <==
	I0507 19:34:12.916087       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="21.128233ms"
	I0507 19:34:12.920189       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="131.008µs"
	I0507 19:36:39.748714       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m02\" does not exist"
	I0507 19:36:39.768095       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000-m02" podCIDRs=["10.244.1.0/24"]
	I0507 19:36:42.771386       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000-m02"
	I0507 19:36:59.833069       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:37:23.261574       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="80.822997ms"
	I0507 19:37:23.275925       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="14.242181ms"
	I0507 19:37:23.277411       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="44.303µs"
	I0507 19:37:25.468822       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="13.984518ms"
	I0507 19:37:25.471412       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="2.381856ms"
	I0507 19:37:26.028543       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.755438ms"
	I0507 19:37:26.029180       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="91.706µs"
	I0507 19:40:53.034791       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:40:53.035911       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m03\" does not exist"
	I0507 19:40:53.048242       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000-m03" podCIDRs=["10.244.2.0/24"]
	I0507 19:40:57.837925       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-600000-m03"
	I0507 19:41:13.622605       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:48:02.948548       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:50:20.695158       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:50:25.866050       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m03\" does not exist"
	I0507 19:50:25.866126       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:50:25.887459       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000-m03" podCIDRs=["10.244.3.0/24"]
	I0507 19:50:31.631900       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:51:58.074557       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	
	
	==> kube-controller-manager [922d1e2b8745] <==
	I0507 19:55:38.983177       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="43.503µs"
	I0507 19:55:39.007447       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="15.25642ms"
	I0507 19:55:39.007824       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="337.32µs"
	I0507 19:57:42.747638       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.595171ms"
	I0507 19:57:42.748113       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="93.206µs"
	I0507 19:57:42.765908       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.524088ms"
	I0507 19:57:42.782079       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.003358ms"
	I0507 19:57:42.782204       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.502µs"
	I0507 19:57:56.103266       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m02\" does not exist"
	I0507 19:57:56.121596       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000-m02" podCIDRs=["10.244.1.0/24"]
	I0507 19:57:57.051477       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="49.403µs"
	I0507 19:58:03.310314       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 19:58:03.343654       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="60.604µs"
	I0507 19:58:10.092393       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="180.411µs"
	I0507 19:58:10.207611       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="59.004µs"
	I0507 19:58:10.216161       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="41.003µs"
	I0507 19:58:11.056952       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="70.404µs"
	I0507 19:58:11.085391       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="45.003µs"
	I0507 19:58:12.238979       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.236094ms"
	I0507 19:58:12.239350       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="40.002µs"
	I0507 20:00:01.477770       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 20:00:07.319182       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-600000-m03\" does not exist"
	I0507 20:00:07.319912       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	I0507 20:00:07.349510       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-600000-m03" podCIDRs=["10.244.2.0/24"]
	I0507 20:00:12.091443       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-600000-m02"
	
	
	==> kube-proxy [5255a972ff6c] <==
	I0507 19:54:35.575583       1 server_linux.go:69] "Using iptables proxy"
	I0507 19:54:35.605564       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.135.22"]
	I0507 19:54:35.819515       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0507 19:54:35.819549       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0507 19:54:35.819565       1 server_linux.go:165] "Using iptables Proxier"
	I0507 19:54:35.837879       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0507 19:54:35.838133       1 server.go:872] "Version info" version="v1.30.0"
	I0507 19:54:35.838147       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:54:35.845888       1 config.go:192] "Starting service config controller"
	I0507 19:54:35.848183       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0507 19:54:35.848226       1 config.go:319] "Starting node config controller"
	I0507 19:54:35.848406       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0507 19:54:35.849079       1 config.go:101] "Starting endpoint slice config controller"
	I0507 19:54:35.849088       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0507 19:54:35.954590       1 shared_informer.go:320] Caches are synced for node config
	I0507 19:54:35.954640       1 shared_informer.go:320] Caches are synced for service config
	I0507 19:54:35.954677       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [aa9692c1fbd3] <==
	I0507 19:33:59.788332       1 server_linux.go:69] "Using iptables proxy"
	I0507 19:33:59.819474       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["172.19.143.74"]
	I0507 19:33:59.872130       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0507 19:33:59.872292       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0507 19:33:59.872320       1 server_linux.go:165] "Using iptables Proxier"
	I0507 19:33:59.878610       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0507 19:33:59.879634       1 server.go:872] "Version info" version="v1.30.0"
	I0507 19:33:59.879774       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:33:59.883100       1 config.go:192] "Starting service config controller"
	I0507 19:33:59.884238       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0507 19:33:59.884310       1 config.go:101] "Starting endpoint slice config controller"
	I0507 19:33:59.884544       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0507 19:33:59.886801       1 config.go:319] "Starting node config controller"
	I0507 19:33:59.888528       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0507 19:33:59.985346       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0507 19:33:59.985458       1 shared_informer.go:320] Caches are synced for service config
	I0507 19:33:59.988897       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [45341720d5be] <==
	I0507 19:54:30.888703       1 serving.go:380] Generated self-signed cert in-memory
	W0507 19:54:33.652802       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0507 19:54:33.652844       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0507 19:54:33.652885       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0507 19:54:33.652896       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0507 19:54:33.748572       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.0"
	I0507 19:54:33.749371       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0507 19:54:33.757368       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0507 19:54:33.758296       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0507 19:54:33.758449       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0507 19:54:33.759872       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0507 19:54:33.860140       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [7cefdac2050f] <==
	E0507 19:33:42.157128       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0507 19:33:42.162271       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0507 19:33:42.162599       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0507 19:33:42.229371       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0507 19:33:42.229525       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0507 19:33:42.264429       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0507 19:33:42.264596       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0507 19:33:42.284763       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0507 19:33:42.284872       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0507 19:33:42.338396       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0507 19:33:42.338683       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0507 19:33:42.356861       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0507 19:33:42.356964       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0507 19:33:42.435844       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0507 19:33:42.436739       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0507 19:33:42.446379       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0507 19:33:42.446557       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0507 19:33:42.489593       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0507 19:33:42.489896       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0507 19:33:42.647287       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0507 19:33:42.648065       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0507 19:33:42.657928       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0507 19:33:42.658018       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0507 19:33:43.909008       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0507 19:52:16.714078       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	May 07 19:56:28 multinode-600000 kubelet[1526]: E0507 19:56:28.874125    1526 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 19:56:28 multinode-600000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 19:56:28 multinode-600000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 19:56:28 multinode-600000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 19:56:28 multinode-600000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 19:57:28 multinode-600000 kubelet[1526]: E0507 19:57:28.872600    1526 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 19:57:28 multinode-600000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 19:57:28 multinode-600000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 19:57:28 multinode-600000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 19:57:28 multinode-600000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 19:58:28 multinode-600000 kubelet[1526]: E0507 19:58:28.873819    1526 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 19:58:28 multinode-600000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 19:58:28 multinode-600000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 19:58:28 multinode-600000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 19:58:28 multinode-600000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 19:59:28 multinode-600000 kubelet[1526]: E0507 19:59:28.878904    1526 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 19:59:28 multinode-600000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 19:59:28 multinode-600000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 19:59:28 multinode-600000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 19:59:28 multinode-600000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	May 07 20:00:28 multinode-600000 kubelet[1526]: E0507 20:00:28.874373    1526 iptables.go:577] "Could not set up iptables canary" err=<
	May 07 20:00:28 multinode-600000 kubelet[1526]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	May 07 20:00:28 multinode-600000 kubelet[1526]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	May 07 20:00:28 multinode-600000 kubelet[1526]:         Perhaps ip6tables or your kernel needs to be upgraded.
	May 07 20:00:28 multinode-600000 kubelet[1526]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
** stderr ** 
	W0507 20:00:25.198727   13964 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-600000 -n multinode-600000
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-600000 -n multinode-600000: (11.0040333s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-600000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (592.40s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (306.26s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-728800 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-728800 --driver=hyperv: exit status 1 (4m59.7311793s)

                                                
                                                
-- stdout --
	* [NoKubernetes-728800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18804
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-728800" primary control-plane node in "NoKubernetes-728800" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

-- /stdout --
** stderr ** 
	W0507 20:16:39.977795    2272 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-728800 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-728800 -n NoKubernetes-728800
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-728800 -n NoKubernetes-728800: exit status 7 (6.5319777s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	W0507 20:21:39.711776    7004 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0507 20:21:46.137440    7004 status.go:352] failed to get driver ip: getting IP: IP not found
	E0507 20:21:46.137440    7004 status.go:249] status error: getting IP: IP not found

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-728800" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (306.26s)

TestPause/serial/DeletePaused (33.37s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-774000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Non-zero exit: out/minikube-windows-amd64.exe delete -p pause-774000 --alsologtostderr -v=5: exit status 1 (25.2129692s)

-- stdout --
	* Stopping node "pause-774000"  ...
	* Powering off "pause-774000" via SSH ...

-- /stdout --
** stderr ** 
	W0507 20:51:43.418308    6796 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0507 20:51:43.502990    6796 out.go:291] Setting OutFile to fd 1620 ...
	I0507 20:51:43.503993    6796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 20:51:43.503993    6796 out.go:304] Setting ErrFile to fd 2024...
	I0507 20:51:43.503993    6796 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 20:51:43.522000    6796 out.go:298] Setting JSON to false
	I0507 20:51:43.531995    6796 cli_runner.go:164] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}
	I0507 20:51:43.717332    6796 config.go:182] Loaded profile config "auto-808800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 20:51:43.718326    6796 config.go:182] Loaded profile config "calico-808800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 20:51:43.718326    6796 config.go:182] Loaded profile config "custom-flannel-808800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 20:51:43.719335    6796 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 20:51:43.719335    6796 config.go:182] Loaded profile config "pause-774000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 20:51:43.720338    6796 config.go:182] Loaded profile config "pause-774000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 20:51:43.720338    6796 delete.go:301] DeleteProfiles
	I0507 20:51:43.720338    6796 delete.go:329] Deleting pause-774000
	I0507 20:51:43.720338    6796 delete.go:334] pause-774000 configuration: &{Name:pause-774000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.30.0 ClusterName:pause-774000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.19.135.175 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-d
evice-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 20:51:43.720338    6796 config.go:182] Loaded profile config "pause-774000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 20:51:43.721325    6796 config.go:182] Loaded profile config "pause-774000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 20:51:43.723333    6796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-774000 ).state
	I0507 20:51:46.115560    6796 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 20:51:46.115560    6796 main.go:141] libmachine: [stderr =====>] : 
	I0507 20:51:46.115560    6796 stop.go:39] StopHost: pause-774000
	I0507 20:51:46.118863    6796 out.go:177] * Stopping node "pause-774000"  ...
	I0507 20:51:46.122459    6796 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0507 20:51:46.131686    6796 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0507 20:51:46.131686    6796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-774000 ).state
	I0507 20:51:48.433415    6796 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 20:51:48.433415    6796 main.go:141] libmachine: [stderr =====>] : 
	I0507 20:51:48.433415    6796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-774000 ).networkadapters[0]).ipaddresses[0]
	I0507 20:51:51.252425    6796 main.go:141] libmachine: [stdout =====>] : 172.19.135.175
	
	I0507 20:51:51.253111    6796 main.go:141] libmachine: [stderr =====>] : 
	I0507 20:51:51.253427    6796 sshutil.go:53] new ssh client: &{IP:172.19.135.175 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\pause-774000\id_rsa Username:docker}
	I0507 20:51:51.365108    6796 ssh_runner.go:235] Completed: sudo mkdir -p /var/lib/minikube/backup: (5.2330611s)
	I0507 20:51:51.373638    6796 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0507 20:51:51.456488    6796 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0507 20:51:51.527781    6796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-774000 ).state
	I0507 20:51:53.857734    6796 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 20:51:53.857826    6796 main.go:141] libmachine: [stderr =====>] : 
	W0507 20:51:53.857826    6796 register.go:133] "PowerOff" was not found within the registered steps for "Deleting": [Deleting Stopping Done Puring home dir]
	I0507 20:51:53.861325    6796 out.go:177] * Powering off "pause-774000" via SSH ...
	I0507 20:51:53.865667    6796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-774000 ).state
	I0507 20:51:56.241394    6796 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 20:51:56.241394    6796 main.go:141] libmachine: [stderr =====>] : 
	I0507 20:51:56.241466    6796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-774000 ).networkadapters[0]).ipaddresses[0]
	I0507 20:51:58.980840    6796 main.go:141] libmachine: [stdout =====>] : 172.19.135.175
	
	I0507 20:51:58.980919    6796 main.go:141] libmachine: [stderr =====>] : 
	I0507 20:51:58.984648    6796 main.go:141] libmachine: Using SSH client type: native
	I0507 20:51:58.984751    6796 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x11aa1c0] 0x11acda0 <nil>  [] 0s} 172.19.135.175 22 <nil> <nil>}
	I0507 20:51:58.984751    6796 main.go:141] libmachine: About to run SSH command:
	sudo poweroff
	I0507 20:51:59.168918    6796 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0507 20:51:59.168918    6796 stop.go:100] poweroff result: out=, err=<nil>
	I0507 20:51:59.168918    6796 main.go:141] libmachine: Stopping "pause-774000"...
	I0507 20:51:59.168918    6796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-774000 ).state
	I0507 20:52:02.495055    6796 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 20:52:02.495983    6796 main.go:141] libmachine: [stderr =====>] : 
	I0507 20:52:02.495983    6796 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Stop-VM pause-774000

** /stderr **
pause_test.go:134: failed to delete minikube with args: "out/minikube-windows-amd64.exe delete -p pause-774000 --alsologtostderr -v=5" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-774000 -n pause-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-774000 -n pause-774000: exit status 7 (5.5556316s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	W0507 20:52:08.638697    4596 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	E0507 20:52:14.048367    4596 status.go:352] failed to get driver ip: getting IP: Host is not running
	E0507 20:52:14.048367    4596 status.go:249] status error: getting IP: Host is not running

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-774000" host is not running, skipping log retrieval (state="Error")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-774000 -n pause-774000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-774000 -n pause-774000: exit status 7 (2.6021912s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0507 20:52:14.178471   13704 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "pause-774000" host is not running, skipping log retrieval (state="Stopped")
--- FAIL: TestPause/serial/DeletePaused (33.37s)

TestNetworkPlugins/group/flannel/Start (10800.492s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-808800 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperv
E0507 20:58:32.161424    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\auto-808800\client.crt: The system cannot find the path specified.
panic: test timed out after 3h0m0s
running tests:
	TestNetworkPlugins (27m9s)
	TestNetworkPlugins/group/custom-flannel (10m7s)
	TestNetworkPlugins/group/false (6m1s)
	TestNetworkPlugins/group/flannel (12s)
	TestNetworkPlugins/group/flannel/Start (12s)
	TestNetworkPlugins/group/kindnet (4m0s)
	TestNetworkPlugins/group/kindnet/Start (4m0s)
	TestStartStop (21m25s)

goroutine 2810 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

goroutine 1 [chan receive, 10 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000226ea0, 0xc00120fbb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000a0a270, {0x494d540, 0x2a, 0x2a}, {0x261852b?, 0x45806f?, 0x4970760?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc000a497c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc000a497c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

goroutine 6 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0004af000)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

goroutine 124 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 123
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2601 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001222600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2623
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 122 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000a826d0, 0x3c)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x20b4be0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0002215c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000a82700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0001911e0, {0x3582340, 0xc000833c20}, 0x1, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0001911e0, 0x3b9aca00, 0x0, 0x1, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 147
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 51 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 50
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

goroutine 862 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0019c0550, 0x36)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x20b4be0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001572a20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0019c0580)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001950c30, {0x3582340, 0xc001301e00}, 0x1, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001950c30, 0x3b9aca00, 0x0, 0x1, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 811
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 810 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001572b40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 803
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2408 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0008b3090, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x20b4be0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001acfc80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008b3140)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000754070, {0x3582340, 0xc0019305d0}, 0x1, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc000754070, 0x3b9aca00, 0x0, 0x1, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2462
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 123 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35a5d40, 0xc000740060}, 0xc001227f50, 0xc001227f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x35a5d40, 0xc000740060}, 0xa0?, 0xc001227f50, 0xc001227f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35a5d40?, 0xc000740060?}, 0x0?, 0x4e7c60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001227fd0?, 0x52e404?, 0xc000734340?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 147
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2187 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000a088c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001235040)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001235040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001235040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001235040, 0xc0004ae400)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2184
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 863 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35a5d40, 0xc000740060}, 0xc001303f50, 0xc001303f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x35a5d40, 0xc000740060}, 0x90?, 0xc001303f50, 0xc001303f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35a5d40?, 0xc000740060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001303fd0?, 0x52e404?, 0xc001284c80?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 811
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2659 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc0017dec60, 0xc0012a8c00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2560
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 683 [IO wait, 163 minutes]:
internal/poll.runtime_pollWait(0x27af278de08, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc00046dc08?, 0x0?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc0008371a0, 0xc0003e7bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).acceptOne(0xc000837188, 0x344, {0xc0007425a0?, 0x0?, 0x0?}, 0xc00046d808?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:944 +0x67
internal/poll.(*FD).Accept(0xc000837188, 0xc0003e7d90)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:978 +0x1bc
net.(*netFD).accept(0xc000837188)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_windows.go:178 +0x54
net.(*TCPListener).accept(0xc001a1e180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc001a1e180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc000a3e0f0, {0x3598de0, 0xc001a1e180})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc000a3e0f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xd?, 0xc000578680)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 638
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2797 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc0016f5b20?, 0x3b7ea5?, 0x49fdbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x3a2c77?, 0xc0016f5b80?, 0x3afdd6?, 0x49fdbc0?, 0xc0016f5c08?, 0x3a2985?, 0x27aecd30a28?, 0xc001536077?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x3a8, {0xc0018ef11a?, 0xee6, 0x45417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0017ec288?, {0xc0018ef11a?, 0x3dc1be?, 0x4000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0017ec288, {0xc0018ef11a, 0xee6, 0xee6})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6c00, {0xc0018ef11a?, 0xc0016f5d98?, 0x2000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00136a1b0, {0x3580f00, 0xc00038f3a0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3581040, 0xc00136a1b0}, {0x3580f00, 0xc00038f3a0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3581040, 0xc00136a1b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4901840?, {0x3581040?, 0xc00136a1b0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3581040, 0xc00136a1b0}, {0x3580fc0, 0xc0000a6c00}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x302b028?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2827
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2373 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35a5d40, 0xc000740060}, 0xc001225f50, 0xc001225f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x35a5d40, 0xc000740060}, 0x90?, 0xc001225f50, 0xc001225f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35a5d40?, 0xc000740060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc001225fd0?, 0x52e404?, 0xc00192c4e0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2360
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 146 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0002216e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 134
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 147 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000a82700, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 134
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 932 [chan send, 148 minutes]:
os/exec.(*Cmd).watchCtx(0xc001666160, 0xc001664240)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 822
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 1149 [chan send, 151 minutes]:
os/exec.(*Cmd).watchCtx(0xc00162e160, 0xc001664c60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1148
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2754 [chan receive, 3 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001a9c980, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2703
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2602 [chan receive, 5 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000205dc0, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2623
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2586 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2585
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2584 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc000205d90, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x20b4be0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001222420)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000205dc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00232c270, {0x3582340, 0xc001538000}, 0x1, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00232c270, 0x3b9aca00, 0x0, 0x1, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2602
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2742 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2741
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2462 [chan receive, 8 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008b3140, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2460
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2191 [syscall, locked to thread]:
syscall.SyscallN(0x7ffdf6a24de0?, {0xc0015ad108?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x0?, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x624, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00178e240)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0017df1e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0017df1e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
os/exec.(*Cmd).CombinedOutput(0xc0017df1e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:1012 +0x85
k8s.io/minikube/test/integration.debugLogs(0xc001235ba0, {0xc00165c6d0, 0xc})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:414 +0x3de5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001235ba0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:211 +0xbcc
testing.tRunner(0xc001235ba0, 0xc0004ae600)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2184
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2410 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2409
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2673 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001239c80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2703
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2307 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc000a088c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000579d40)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000579d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000579d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000579d40, 0xc000a0c540)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2253
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2741 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35a5d40, 0xc000740060}, 0xc0014a5f50, 0xc0014a5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x35a5d40, 0xc000740060}, 0x90?, 0xc0014a5f50, 0xc0014a5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35a5d40?, 0xc000740060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0014a5fd0?, 0x52e404?, 0xc000a1ac60?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2754
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2253 [chan receive, 22 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc000579040, 0x302b258)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2106
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2192 [syscall, locked to thread]:
syscall.SyscallN(0x7ffdf6a24de0?, {0xc0015a9108?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x0?, 0x1?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x6b0, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00178e7e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0017df340)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0017df340)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
os/exec.(*Cmd).CombinedOutput(0xc0017df340)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:1012 +0x85
k8s.io/minikube/test/integration.debugLogs(0xc001235d40, {0xc00169a498, 0x15})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:602 +0xa1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001235d40)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:211 +0xbcc
testing.tRunner(0xc001235d40, 0xc0004ae680)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2184
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2255 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc000a088c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000579520)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000579520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000579520)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000579520, 0xc000a0c100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2253
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2184 [chan receive, 5 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc001234820, 0xc0016961b0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2025
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2190 [chan receive, 5 minutes]:
testing.(*T).Run(0xc001235a00, {0x25bc9f6?, 0x357aed8?}, 0xc0018a8d50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001235a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc001235a00, 0xc0004ae580)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2184
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2409 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35a5d40, 0xc000740060}, 0xc0012ebf50, 0xc0012ebf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x35a5d40, 0xc000740060}, 0x90?, 0xc0012ebf50, 0xc0012ebf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35a5d40?, 0xc000740060?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012ebfd0?, 0x52e404?, 0xc000a4eab0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2462
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2254 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc000a088c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000579380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000579380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000579380)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000579380, 0xc000a0c080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2253
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 864 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 863
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2306 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc000a088c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000579a00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000579a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000579a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000579a00, 0xc000a0c280)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2253
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 811 [chan receive, 154 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0019c0580, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 803
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2461 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001acfda0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2460
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2798 [select]:
os/exec.(*Cmd).watchCtx(0xc0017df080, 0xc0012a9440)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 2827
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x9f3

goroutine 2186 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000a088c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc001234d00)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc001234d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001234d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc001234d00, 0xc0004ae300)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2184
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2188 [chan receive, 28 minutes]:
testing.(*testContext).waitParallel(0xc000a088c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0012356c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0012356c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc0012356c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:106 +0x334
testing.tRunner(0xc0012356c0, 0xc0004ae480)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2184
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2025 [chan receive, 28 minutes]:
testing.(*T).Run(0xc001234000, {0x25bc9f1?, 0x40f48d?}, 0xc0016961b0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001234000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc001234000, 0x302b038)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2796 [syscall, locked to thread]:
syscall.SyscallN(0xc00129ea00?, {0xc000a6fb20?, 0x3b7ea5?, 0x49fdbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x54d?, 0xc000a6fb80?, 0x3afdd6?, 0x49fdbc0?, 0xc000a6fc08?, 0x3a2985?, 0x27aecd30a28?, 0x4d?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x734, {0xc0018f022e?, 0x5d2, 0x45417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0013a9908?, {0xc0018f022e?, 0x3dc1be?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0013a9908, {0xc0018f022e, 0x5d2, 0x5d2})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6bb8, {0xc0018f022e?, 0xc00120ae00?, 0x22d?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00136a180, {0x3580f00, 0xc00050ab00})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3581040, 0xc00136a180}, {0x3580f00, 0xc00050ab00}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc000a6fe78?, {0x3581040, 0xc00136a180})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4901840?, {0x3581040?, 0xc00136a180?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3581040, 0xc00136a180}, {0x3580fc0, 0xc0000a6bb8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001d342a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2827
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2372 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0008b2390, 0xf)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x20b4be0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001aceb40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008b2500)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0013a3d50, {0x3582340, 0xc00169d560}, 0x1, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0013a3d50, 0x3b9aca00, 0x0, 0x1, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2360
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2561 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc0017abb20?, 0x5f8?, 0x8?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x0?, 0xffc?, 0x27af2797520?, 0x40?, 0xc0017abc08?, 0x3a281b?, 0x4000?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x6f8, {0xc001c0db02?, 0x4fe, 0x45417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0017ec008?, {0xc001c0db02?, 0xc0017abc70?, 0x800?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0017ec008, {0xc001c0db02, 0x4fe, 0x4fe})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6a70, {0xc001c0db02?, 0x9cba85?, 0x22d?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0018a8e10, {0x3580f00, 0xc00038f480})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3581040, 0xc0018a8e10}, {0x3580f00, 0xc00038f480}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc0009a8680?, {0x3581040, 0xc0018a8e10})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4901840?, {0x3581040?, 0xc0018a8e10?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3581040, 0xc0018a8e10}, {0x3580fc0, 0xc0000a6a70}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0004b8380?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2560
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2257 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc000a088c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc000579860)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc000579860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000579860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc000579860, 0xc000a0c240)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2253
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2256 [chan receive, 22 minutes]:
testing.(*testContext).waitParallel(0xc000a088c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.(*T).Parallel(0xc0005796c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1484 +0x229
k8s.io/minikube/test/integration.MaybeParallel(0xc0005796c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0005796c0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:94 +0x45
testing.tRunner(0xc0005796c0, 0xc000a0c200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2253
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2759 [IO wait]:
internal/poll.runtime_pollWait(0x27af278df00, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0x915b44bcb69ffc0?, 0x6f2b7f362905a60b?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc001654020, 0x302bc30)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:175 +0xe6
internal/poll.(*FD).Read(0xc001654008, {0xc001458000, 0x2000, 0x2000})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:436 +0x2b1
net.(*netFD).Read(0xc001654008, {0xc001458000?, 0xc0004c43c0?, 0x2?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_posix.go:55 +0x25
net.(*conn).Read(0xc0000a6af0, {0xc001458000?, 0xc001458005?, 0x22?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/net.go:179 +0x45
crypto/tls.(*atLeastReader).Read(0xc0015fc7b0, {0xc001458000?, 0x0?, 0xc0015fc7b0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:806 +0x3b
bytes.(*Buffer).ReadFrom(0xc00160e630, {0x3582aa0, 0xc0015fc7b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00160e388, {0x27af293a798, 0xc0015fd248}, 0xc00131d980?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:828 +0xde
crypto/tls.(*Conn).readRecordOrCCS(0xc00160e388, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:626 +0x3cf
crypto/tls.(*Conn).readRecord(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:588
crypto/tls.(*Conn).Read(0xc00160e388, {0xc001670000, 0x1000, 0x3d7a49?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/crypto/tls/conn.go:1370 +0x156
bufio.(*Reader).Read(0xc001572300, {0xc0012d4c80, 0x9, 0xc00131dd18?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bufio/bufio.go:241 +0x197
io.ReadAtLeast({0x35810e0, 0xc001572300}, {0xc0012d4c80, 0x9, 0x9}, 0x9)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:335 +0x90
io.ReadFull(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:354
golang.org/x/net/http2.readFrameHeader({0xc0012d4c80, 0x9, 0x940345?}, {0x35810e0?, 0xc001572300?})
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:237 +0x65
golang.org/x/net/http2.(*Framer).ReadFrame(0xc0012d4c40)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/frame.go:498 +0x85
golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00131dfa8)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2429 +0xd8
golang.org/x/net/http2.(*ClientConn).readLoop(0xc000189200)
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:2325 +0x65
created by golang.org/x/net/http2.(*ClientConn).goRun in goroutine 2758
	/var/lib/jenkins/go/pkg/mod/golang.org/x/net@v0.24.0/http2/transport.go:369 +0x2d

goroutine 2106 [chan receive, 22 minutes]:
testing.(*T).Run(0xc001234ea0, {0x25bc9f1?, 0x4e7333?}, 0x302b258)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc001234ea0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc001234ea0, 0x302b080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2560 [syscall, 5 minutes, locked to thread]:
syscall.SyscallN(0x7ffdf6a24de0?, {0xc001307bd0?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x7fc, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc0017ea750)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0017dec60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0017dec60)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc0013ba000, 0xc0017dec60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc0013ba000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc0013ba000, 0xc0018a8d50)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2190
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2374 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2373
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2189 [chan receive]:
testing.(*T).Run(0xc001235860, {0x25bc9f6?, 0x357aed8?}, 0xc0016f9350)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001235860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:111 +0x5de
testing.tRunner(0xc001235860, 0xc0004ae500)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2184
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2658 [syscall, locked to thread]:
syscall.SyscallN(0x35763a72656e6f69?, {0xc001d31b20?, 0xa2d2d2074756f64?, 0x43951cdc3d4a6808?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0x27af2942140?, 0xc001d31b80?, 0x3afdd6?, 0x49fdbc0?, 0xc001d31c08?, 0x3a2985?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x678, {0xc001a38715?, 0x78eb, 0x45417f?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0017ec508?, {0xc001a38715?, 0x3dc171?, 0x20000?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0017ec508, {0xc001a38715, 0x78eb, 0x78eb})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6aa0, {0xc001a38715?, 0xc001d31d98?, 0xfef6?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0018a8e40, {0x3580f00, 0xc00050af38})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3581040, 0xc0018a8e40}, {0x3580f00, 0xc00050af38}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3581040, 0xc0018a8e40})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4901840?, {0x3581040?, 0xc0018a8e40?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3581040, 0xc0018a8e40}, {0x3580fc0, 0xc0000a6aa0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc001d31fa8?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2560
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2827 [syscall, locked to thread]:
syscall.SyscallN(0x7ffdf6a24de0?, {0xc0017a7bd0?, 0x3?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall(0x3?, 0x3?, 0x1?, 0x2?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:482 +0x35
syscall.WaitForSingleObject(0x778, 0xffffffff)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1142 +0x5d
os.(*Process).wait(0xc00071dc80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_windows.go:18 +0x50
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc0017df080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc0017df080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc000a2d860, 0xc0017df080)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc000a2d860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:112 +0x52
testing.tRunner(0xc000a2d860, 0xc0016f9350)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2189
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2740 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc001a9c950, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x15d
sync.(*Cond).Wait(0x20b4be0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001239b60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001a9c980)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0019504e0, {0x3582340, 0xc00136acc0}, 0x1, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0019504e0, 0x3b9aca00, 0x0, 0x1, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2754
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2799 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc001d2db20?, 0x3b7ea5?, 0x49fdbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc001d2db67?, 0xc001d2db80?, 0x3afdd6?, 0x49fdbc0?, 0xc001d2dc08?, 0x3a281b?, 0x398ba6?, 0x67?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x730, {0xc00167e53a?, 0x2c6, 0xc00167e400?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0017ecc88?, {0xc00167e53a?, 0x3dc1be?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0017ecc88, {0xc00167e53a, 0x2c6, 0x2c6})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6c50, {0xc00167e53a?, 0xc001d2dd98?, 0x13a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00136a1e0, {0x3580f00, 0xc00038f3a8})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3581040, 0xc00136a1e0}, {0x3580f00, 0xc00038f3a8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3581040, 0xc00136a1e0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4901840?, {0x3581040?, 0xc00136a1e0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3581040, 0xc00136a1e0}, {0x3580fc0, 0xc0000a6c50}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc000a1ad80?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2191
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2359 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001acede0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2355
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2360 [chan receive, 12 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008b2500, 0xc000740060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2355
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2800 [syscall, locked to thread]:
syscall.SyscallN(0x0?, {0xc001c1fb20?, 0x3b7ea5?, 0x49fdbc0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:544 +0x107
syscall.Syscall6(0xc001c1fb67?, 0xc001c1fb80?, 0x3afdd6?, 0x49fdbc0?, 0xc001c1fc08?, 0x3a281b?, 0x398ba6?, 0x67?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/syscall_windows.go:488 +0x4a
syscall.readFile(0x2ec, {0xc00158913a?, 0x2c6, 0xc001589000?}, 0x0?, 0x800000?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_windows.go:1021 +0x8b
syscall.ReadFile(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:442
syscall.Read(0xc0017ed408?, {0xc00158913a?, 0x3dc1be?, 0x400?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_windows.go:421 +0x2d
internal/poll.(*FD).Read(0xc0017ed408, {0xc00158913a, 0x2c6, 0x2c6})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_windows.go:422 +0x1c5
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0000a6ca0, {0xc00158913a?, 0xc001c1fd98?, 0x13a?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc00136a240, {0x3580f00, 0xc00038f3b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3581040, 0xc00136a240}, {0x3580f00, 0xc00038f3b0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3581040, 0xc00136a240})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x4901840?, {0x3581040?, 0xc00136a240?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x3581040, 0xc00136a240}, {0x3580fc0, 0xc0000a6ca0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0x302b060?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2192
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0xa2b

goroutine 2585 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x35a5d40, 0xc000740060}, 0xc0009bff50, 0xc0009bff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x35a5d40, 0xc000740060}, 0x60?, 0xc0009bff50, 0xc0009bff98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x35a5d40?, 0xc000740060?}, 0xc0009bffb0?, 0x936448?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x52e3a5?, 0xc001407a20?, 0xc00192d860?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2602
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a


Test pass (165/209)

Order	Passed test	Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 16.93
4 TestDownloadOnly/v1.20.0/preload-exists 0.06
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.27
9 TestDownloadOnly/v1.20.0/DeleteAll 1.25
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 1.24
12 TestDownloadOnly/v1.30.0/json-events 10.09
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.46
18 TestDownloadOnly/v1.30.0/DeleteAll 1.03
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 1.15
21 TestBinaryMirror 6.69
22 TestOffline 398.66
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.24
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.23
27 TestAddons/Setup 368.57
30 TestAddons/parallel/Ingress 59.65
31 TestAddons/parallel/InspektorGadget 24.17
32 TestAddons/parallel/MetricsServer 20.7
33 TestAddons/parallel/HelmTiller 26.92
35 TestAddons/parallel/CSI 100.12
36 TestAddons/parallel/Headlamp 33.14
37 TestAddons/parallel/CloudSpanner 18.99
38 TestAddons/parallel/LocalPath 81.9
39 TestAddons/parallel/NvidiaDevicePlugin 18.76
40 TestAddons/parallel/Yakd 6.01
43 TestAddons/serial/GCPAuth/Namespaces 0.28
44 TestAddons/StoppedEnableDisable 50.09
45 TestCertOptions 505.9
46 TestCertExpiration 928.98
47 TestDockerFlags 458.98
48 TestForceSystemdFlag 485.91
49 TestForceSystemdEnv 330.33
56 TestErrorSpam/start 15.39
57 TestErrorSpam/status 33.03
58 TestErrorSpam/pause 20.25
59 TestErrorSpam/unpause 20.45
60 TestErrorSpam/stop 52.02
63 TestFunctional/serial/CopySyncFile 0.03
64 TestFunctional/serial/StartWithProxy 188.78
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 117.04
67 TestFunctional/serial/KubeContext 0.12
68 TestFunctional/serial/KubectlGetPods 0.21
71 TestFunctional/serial/CacheCmd/cache/add_remote 23.86
72 TestFunctional/serial/CacheCmd/cache/add_local 9.8
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.2
74 TestFunctional/serial/CacheCmd/cache/list 0.21
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 8.45
76 TestFunctional/serial/CacheCmd/cache/cache_reload 32.59
77 TestFunctional/serial/CacheCmd/cache/delete 0.43
78 TestFunctional/serial/MinikubeKubectlCmd 0.44
80 TestFunctional/serial/ExtraConfig 118.5
81 TestFunctional/serial/ComponentHealth 0.15
82 TestFunctional/serial/LogsCmd 7.66
83 TestFunctional/serial/LogsFileCmd 9.51
84 TestFunctional/serial/InvalidService 19.58
90 TestFunctional/parallel/StatusCmd 36.92
94 TestFunctional/parallel/ServiceCmdConnect 28.05
95 TestFunctional/parallel/AddonsCmd 0.57
96 TestFunctional/parallel/PersistentVolumeClaim 38.51
98 TestFunctional/parallel/SSHCmd 18.07
99 TestFunctional/parallel/CpCmd 53.48
100 TestFunctional/parallel/MySQL 56.2
101 TestFunctional/parallel/FileSync 8.97
102 TestFunctional/parallel/CertSync 55.49
106 TestFunctional/parallel/NodeLabels 0.17
108 TestFunctional/parallel/NonActiveRuntimeDisabled 9.22
110 TestFunctional/parallel/License 2.49
111 TestFunctional/parallel/ServiceCmd/DeployApp 16.39
112 TestFunctional/parallel/ProfileCmd/profile_not_create 10.15
113 TestFunctional/parallel/ProfileCmd/profile_list 9.89
114 TestFunctional/parallel/ServiceCmd/List 12.51
115 TestFunctional/parallel/ProfileCmd/profile_json_output 10.12
116 TestFunctional/parallel/ServiceCmd/JSONOutput 12.4
118 TestFunctional/parallel/DockerEnv/powershell 40.48
120 TestFunctional/parallel/Version/short 0.2
121 TestFunctional/parallel/Version/components 7.31
122 TestFunctional/parallel/ImageCommands/ImageListShort 6.95
123 TestFunctional/parallel/ImageCommands/ImageListTable 6.71
124 TestFunctional/parallel/ImageCommands/ImageListJson 6.87
125 TestFunctional/parallel/ImageCommands/ImageListYaml 6.95
126 TestFunctional/parallel/ImageCommands/ImageBuild 24.78
127 TestFunctional/parallel/ImageCommands/Setup 4.05
128 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 22.08
130 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 17.98
131 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 23.49
132 TestFunctional/parallel/UpdateContextCmd/no_changes 2.81
133 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.32
134 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.3
135 TestFunctional/parallel/ImageCommands/ImageSaveToFile 8.62
137 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 8.04
138 TestFunctional/parallel/ImageCommands/ImageRemove 15.06
139 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.51
142 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 15.73
148 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
149 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 10.72
150 TestFunctional/delete_addon-resizer_images 0.4
151 TestFunctional/delete_my-image_image 0.15
152 TestFunctional/delete_minikube_cached_images 0.15
156 TestMultiControlPlane/serial/StartCluster 648.28
157 TestMultiControlPlane/serial/DeployApp 10.94
159 TestMultiControlPlane/serial/AddWorkerNode 228.47
160 TestMultiControlPlane/serial/NodeLabels 0.17
161 TestMultiControlPlane/serial/HAppyAfterClusterStart 25.42
162 TestMultiControlPlane/serial/CopyFile 560.01
163 TestMultiControlPlane/serial/StopSecondaryNode 67.02
164 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 18.86
168 TestImageBuild/serial/Setup 184.46
169 TestImageBuild/serial/NormalBuild 8.69
170 TestImageBuild/serial/BuildWithBuildArg 7.97
171 TestImageBuild/serial/BuildWithDockerIgnore 6.93
172 TestImageBuild/serial/BuildWithSpecifiedDockerfile 6.78
176 TestJSONOutput/start/Command 224.86
177 TestJSONOutput/start/Audit 0
179 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
180 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
182 TestJSONOutput/pause/Command 6.93
183 TestJSONOutput/pause/Audit 0
185 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
186 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
188 TestJSONOutput/unpause/Command 6.97
189 TestJSONOutput/unpause/Audit 0
191 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
192 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
194 TestJSONOutput/stop/Command 32.7
195 TestJSONOutput/stop/Audit 0
197 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
198 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
199 TestErrorJSONOutput 1.2
204 TestMainNoArgs 0.17
205 TestMinikubeProfile 483.64
208 TestMountStart/serial/StartWithMountFirst 138.14
209 TestMountStart/serial/VerifyMountFirst 8.63
210 TestMountStart/serial/StartWithMountSecond 137.73
211 TestMountStart/serial/VerifyMountSecond 8.41
212 TestMountStart/serial/DeleteFirst 25.13
213 TestMountStart/serial/VerifyMountPostDelete 8.42
214 TestMountStart/serial/Stop 27.37
215 TestMountStart/serial/RestartStopped 105.96
216 TestMountStart/serial/VerifyMountPostStop 8.68
219 TestMultiNode/serial/FreshStart2Nodes 386.58
220 TestMultiNode/serial/DeployApp2Nodes 8.21
222 TestMultiNode/serial/AddNode 204.21
223 TestMultiNode/serial/MultiNodeLabels 0.15
224 TestMultiNode/serial/ProfileList 10.33
225 TestMultiNode/serial/CopyFile 318.07
226 TestMultiNode/serial/StopNode 68.59
227 TestMultiNode/serial/StartAfterStop 160.94
232 TestPreload 485.47
233 TestScheduledStopWindows 305.68
238 TestRunningBinaryUpgrade 1017.65
240 TestKubernetesUpgrade 977.41
243 TestNoKubernetes/serial/StartNoK8sWithVersion 0.31
245 TestStoppedBinaryUpgrade/Setup 0.69
246 TestStoppedBinaryUpgrade/Upgrade 718.04
258 TestStoppedBinaryUpgrade/MinikubeLogs 8.2
267 TestPause/serial/Start 467.12
270 TestPause/serial/SecondStartNoReconfiguration 368.7
279 TestPause/serial/Pause 9.04
281 TestPause/serial/VerifyStatus 12.73
285 TestPause/serial/Unpause 8.29
286 TestPause/serial/PauseAgain 8.93
TestDownloadOnly/v1.20.0/json-events (16.93s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-702700 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-702700 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (16.9315041s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (16.93s)

TestDownloadOnly/v1.20.0/preload-exists (0.06s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.06s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.27s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-702700
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-702700: exit status 85 (269.8296ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-702700 | minikube5\jenkins | v1.33.0 | 07 May 24 17:58 UTC |          |
	|         | -p download-only-702700        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/07 17:58:32
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0507 17:58:32.548178    3140 out.go:291] Setting OutFile to fd 636 ...
	I0507 17:58:32.549196    3140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 17:58:32.549196    3140 out.go:304] Setting ErrFile to fd 640...
	I0507 17:58:32.549196    3140 out.go:338] TERM=,COLORTERM=, which probably does not support color
	W0507 17:58:32.563592    3140 root.go:314] Error reading config file at C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0507 17:58:32.573746    3140 out.go:298] Setting JSON to true
	I0507 17:58:32.576680    3140 start.go:129] hostinfo: {"hostname":"minikube5","uptime":20630,"bootTime":1715084081,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0507 17:58:32.576680    3140 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 17:58:32.586122    3140 out.go:97] [download-only-702700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	W0507 17:58:32.586746    3140 preload.go:294] Failed to list preload files: open C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0507 17:58:32.586746    3140 notify.go:220] Checking for updates...
	I0507 17:58:32.588396    3140 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 17:58:32.591346    3140 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0507 17:58:32.593736    3140 out.go:169] MINIKUBE_LOCATION=18804
	I0507 17:58:32.595953    3140 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0507 17:58:32.600142    3140 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0507 17:58:32.601659    3140 driver.go:392] Setting default libvirt URI to qemu:///system
	I0507 17:58:37.576483    3140 out.go:97] Using the hyperv driver based on user configuration
	I0507 17:58:37.577137    3140 start.go:297] selected driver: hyperv
	I0507 17:58:37.577137    3140 start.go:901] validating driver "hyperv" against <nil>
	I0507 17:58:37.577137    3140 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0507 17:58:37.617944    3140 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0507 17:58:37.619149    3140 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0507 17:58:37.619149    3140 cni.go:84] Creating CNI manager for ""
	I0507 17:58:37.619149    3140 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0507 17:58:37.619149    3140 start.go:340] cluster config:
	{Name:download-only-702700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1714992375-18804@sha256:e2bdbdc6df02839a4c3d52f0fbf3343cbd2bec4f26b90f508a88bbeaee364a04 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-702700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube5:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0507 17:58:37.619149    3140 iso.go:125] acquiring lock: {Name:mk4977609d05da04fcecf95837b3381fb1950afd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 17:58:37.621914    3140 out.go:97] Downloading VM boot image ...
	I0507 17:58:37.622832    3140 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18779/minikube-v1.33.0-1714498396-18779-amd64.iso.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.33.0-1714498396-18779-amd64.iso
	I0507 17:58:41.288829    3140 out.go:97] Starting "download-only-702700" primary control-plane node in "download-only-702700" cluster
	I0507 17:58:41.288914    3140 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0507 17:58:41.338671    3140 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0507 17:58:41.339441    3140 cache.go:56] Caching tarball of preloaded images
	I0507 17:58:41.339585    3140 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0507 17:58:41.355768    3140 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0507 17:58:41.355768    3140 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0507 17:58:41.424756    3140 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0507 17:58:45.122551    3140 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0507 17:58:45.123265    3140 preload.go:255] verifying checksum of C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0507 17:58:46.106287    3140 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0507 17:58:46.107300    3140 profile.go:143] Saving config to C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-702700\config.json ...
	I0507 17:58:46.107619    3140 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\download-only-702700\config.json: {Name:mkcaf58908725c62ac30f8a849d0f0ba214c917d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 17:58:46.108502    3140 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0507 17:58:46.109664    3140 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube5\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-702700 host does not exist
	  To start a cluster, run: "minikube start -p download-only-702700"
-- /stdout --
** stderr ** 
	W0507 17:58:49.481532   14216 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.27s)

TestDownloadOnly/v1.20.0/DeleteAll (1.25s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.2484648s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (1.25s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.24s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-702700
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-702700: (1.2419701s)
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (1.24s)

TestDownloadOnly/v1.30.0/json-events (10.09s)

=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-081300 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-081300 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperv: (10.0858811s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (10.09s)

TestDownloadOnly/v1.30.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

TestDownloadOnly/v1.30.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

TestDownloadOnly/v1.30.0/LogsDuration (0.46s)

=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-081300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-081300: exit status 85 (459.0968ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-702700 | minikube5\jenkins | v1.33.0 | 07 May 24 17:58 UTC |                     |
	|         | -p download-only-702700        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube5\jenkins | v1.33.0 | 07 May 24 17:58 UTC | 07 May 24 17:58 UTC |
	| delete  | -p download-only-702700        | download-only-702700 | minikube5\jenkins | v1.33.0 | 07 May 24 17:58 UTC | 07 May 24 17:58 UTC |
	| start   | -o=json --download-only        | download-only-081300 | minikube5\jenkins | v1.33.0 | 07 May 24 17:58 UTC |                     |
	|         | -p download-only-081300        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/05/07 17:58:52
	Running on machine: minikube5
	Binary: Built with gc go1.22.1 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0507 17:58:52.308766   12672 out.go:291] Setting OutFile to fd 744 ...
	I0507 17:58:52.309383   12672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 17:58:52.309917   12672 out.go:304] Setting ErrFile to fd 728...
	I0507 17:58:52.310257   12672 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 17:58:52.332567   12672 out.go:298] Setting JSON to true
	I0507 17:58:52.335567   12672 start.go:129] hostinfo: {"hostname":"minikube5","uptime":20650,"bootTime":1715084081,"procs":187,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0507 17:58:52.335567   12672 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 17:58:52.343564   12672 out.go:97] [download-only-081300] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0507 17:58:52.343564   12672 notify.go:220] Checking for updates...
	I0507 17:58:52.346568   12672 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 17:58:52.348568   12672 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0507 17:58:52.350565   12672 out.go:169] MINIKUBE_LOCATION=18804
	I0507 17:58:52.352565   12672 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0507 17:58:52.356565   12672 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0507 17:58:52.357578   12672 driver.go:392] Setting default libvirt URI to qemu:///system
	
	
	* The control-plane node download-only-081300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-081300"
-- /stdout --
** stderr ** 
	W0507 17:59:02.328274   14148 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.46s)

TestDownloadOnly/v1.30.0/DeleteAll (1.03s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:197: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.0246414s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (1.03s)

TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.15s)

=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-081300
aaa_download_only_test.go:208: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-only-081300: (1.1475922s)
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (1.15s)

TestBinaryMirror (6.69s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-347400 --alsologtostderr --binary-mirror http://127.0.0.1:49786 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-347400 --alsologtostderr --binary-mirror http://127.0.0.1:49786 --driver=hyperv: (5.9320083s)
helpers_test.go:175: Cleaning up "binary-mirror-347400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-347400
--- PASS: TestBinaryMirror (6.69s)

TestOffline (398.66s)

=== RUN   TestOffline
=== PAUSE TestOffline
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-626900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-626900 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (5m53.3728166s)
helpers_test.go:175: Cleaning up "offline-docker-626900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-626900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-626900: (45.2855563s)
--- PASS: TestOffline (398.66s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.24s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-809100
addons_test.go:928: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-809100: exit status 85 (241.0306ms)
-- stdout --
	* Profile "addons-809100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-809100"
-- /stdout --
** stderr ** 
	W0507 17:59:14.244046    7636 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.24s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.23s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-809100
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-809100: exit status 85 (228.79ms)
-- stdout --
	* Profile "addons-809100" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-809100"
-- /stdout --
** stderr ** 
	W0507 17:59:14.243063    5764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.23s)

TestAddons/Setup (368.57s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-809100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-809100 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperv --addons=ingress --addons=ingress-dns --addons=helm-tiller: (6m8.5677689s)
--- PASS: TestAddons/Setup (368.57s)

TestAddons/parallel/Ingress (59.65s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-809100 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-809100 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-809100 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [5ef39ae7-a47e-42e6-9f7b-af67da99f1cd] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [5ef39ae7-a47e-42e6-9f7b-af67da99f1cd] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0122905s
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-809100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-809100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (8.5600709s)
addons_test.go:269: debug: unexpected stderr for out/minikube-windows-amd64.exe -p addons-809100 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'":
W0507 18:07:03.514470    8004 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
addons_test.go:286: (dbg) Run:  kubectl --context addons-809100 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-809100 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-809100 ip: (2.1935279s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.19.135.136
addons_test.go:306: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-809100 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-windows-amd64.exe -p addons-809100 addons disable ingress-dns --alsologtostderr -v=1: (14.1972267s)
addons_test.go:311: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-809100 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-windows-amd64.exe -p addons-809100 addons disable ingress --alsologtostderr -v=1: (20.017211s)
--- PASS: TestAddons/parallel/Ingress (59.65s)

TestAddons/parallel/InspektorGadget (24.17s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kzdfd" [27a1055a-971e-4151-97f3-b69f110173be] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.0131093s
addons_test.go:841: (dbg) Run:  out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-809100
addons_test.go:841: (dbg) Done: out/minikube-windows-amd64.exe addons disable inspektor-gadget -p addons-809100: (19.1587773s)
--- PASS: TestAddons/parallel/InspektorGadget (24.17s)

TestAddons/parallel/MetricsServer (20.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 21.0804ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-p2jcj" [dc9c7f4c-2494-4d74-b0b8-049eec1fd473] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0125411s
addons_test.go:415: (dbg) Run:  kubectl --context addons-809100 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-809100 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:432: (dbg) Done: out/minikube-windows-amd64.exe -p addons-809100 addons disable metrics-server --alsologtostderr -v=1: (14.5159772s)
--- PASS: TestAddons/parallel/MetricsServer (20.70s)

TestAddons/parallel/HelmTiller (26.92s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 6.8641ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-wd6hw" [04e45f7a-31be-4ab5-8750-092a25c0c93e] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.0163621s
addons_test.go:473: (dbg) Run:  kubectl --context addons-809100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
2024/05/07 18:05:50 [ERR] GET http://172.19.135.136:5000 request failed: Get "http://172.19.135.136:5000": dial tcp 172.19.135.136:5000: connectex: No connection could be made because the target machine actively refused it.
2024/05/07 18:05:50 [DEBUG] GET http://172.19.135.136:5000: retrying in 4s (2 left)
addons_test.go:473: (dbg) Done: kubectl --context addons-809100 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (8.2622819s)
addons_test.go:490: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-809100 addons disable helm-tiller --alsologtostderr -v=1
addons_test.go:490: (dbg) Done: out/minikube-windows-amd64.exe -p addons-809100 addons disable helm-tiller --alsologtostderr -v=1: (13.6248408s)
--- PASS: TestAddons/parallel/HelmTiller (26.92s)

TestAddons/parallel/CSI (100.12s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 44.259ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-809100 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-809100 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [2a76fe6e-56f7-4b09-86fd-1dc82a150532] Pending
helpers_test.go:344: "task-pv-pod" [2a76fe6e-56f7-4b09-86fd-1dc82a150532] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [2a76fe6e-56f7-4b09-86fd-1dc82a150532] Running
2024/05/07 18:05:56 [ERR] GET http://172.19.135.136:5000 request failed: Get "http://172.19.135.136:5000": dial tcp 172.19.135.136:5000: connectex: No connection could be made because the target machine actively refused it.
2024/05/07 18:05:56 [DEBUG] GET http://172.19.135.136:5000: retrying in 8s (1 left)
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.0184575s
addons_test.go:584: (dbg) Run:  kubectl --context addons-809100 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-809100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-809100 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-809100 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-809100 delete pod task-pv-pod: (1.6503475s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-809100 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-809100 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-809100 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [0a059f5f-dfcc-4150-a11f-ed5d7c9e895d] Pending
helpers_test.go:344: "task-pv-pod-restore" [0a059f5f-dfcc-4150-a11f-ed5d7c9e895d] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [0a059f5f-dfcc-4150-a11f-ed5d7c9e895d] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.0123517s
addons_test.go:626: (dbg) Run:  kubectl --context addons-809100 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-809100 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-809100 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-809100 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-windows-amd64.exe -p addons-809100 addons disable csi-hostpath-driver --alsologtostderr -v=1: (20.3690973s)
addons_test.go:642: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-809100 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:642: (dbg) Done: out/minikube-windows-amd64.exe -p addons-809100 addons disable volumesnapshots --alsologtostderr -v=1: (14.4352097s)
--- PASS: TestAddons/parallel/CSI (100.12s)

TestAddons/parallel/Headlamp (33.14s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-809100 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-809100 --alsologtostderr -v=1: (14.1245732s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-68456f997b-l75qt" [d9ee4935-c9c3-4dff-894f-bc9d58f0c0ef] Pending
helpers_test.go:344: "headlamp-68456f997b-l75qt" [d9ee4935-c9c3-4dff-894f-bc9d58f0c0ef] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-68456f997b-l75qt" [d9ee4935-c9c3-4dff-894f-bc9d58f0c0ef] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 19.0164998s
--- PASS: TestAddons/parallel/Headlamp (33.14s)

TestAddons/parallel/CloudSpanner (18.99s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-dv82b" [24abdee8-228c-4b45-8e90-c01350082c73] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0124414s
addons_test.go:860: (dbg) Run:  out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-809100
addons_test.go:860: (dbg) Done: out/minikube-windows-amd64.exe addons disable cloud-spanner -p addons-809100: (13.9692122s)
--- PASS: TestAddons/parallel/CloudSpanner (18.99s)

TestAddons/parallel/LocalPath (81.9s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-809100 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-809100 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-809100 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [2b194019-93ac-44ee-9de6-cac0f6928721] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [2b194019-93ac-44ee-9de6-cac0f6928721] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [2b194019-93ac-44ee-9de6-cac0f6928721] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.014944s
addons_test.go:891: (dbg) Run:  kubectl --context addons-809100 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-809100 ssh "cat /opt/local-path-provisioner/pvc-36cf19a0-0753-4154-9461-803993989c88_default_test-pvc/file1"
addons_test.go:900: (dbg) Done: out/minikube-windows-amd64.exe -p addons-809100 ssh "cat /opt/local-path-provisioner/pvc-36cf19a0-0753-4154-9461-803993989c88_default_test-pvc/file1": (9.1616661s)
addons_test.go:912: (dbg) Run:  kubectl --context addons-809100 delete pod test-local-path
addons_test.go:912: (dbg) Done: kubectl --context addons-809100 delete pod test-local-path: (1.0467639s)
addons_test.go:916: (dbg) Run:  kubectl --context addons-809100 delete pvc test-pvc
2024/05/07 18:05:46 [ERR] GET http://172.19.135.136:5000 request failed: Get "http://172.19.135.136:5000": dial tcp 172.19.135.136:5000: connectex: No connection could be made because the target machine actively refused it.
2024/05/07 18:05:46 [DEBUG] GET http://172.19.135.136:5000: retrying in 2s (3 left)
addons_test.go:920: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-809100 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-windows-amd64.exe -p addons-809100 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (58.3701089s)
--- PASS: TestAddons/parallel/LocalPath (81.90s)

TestAddons/parallel/NvidiaDevicePlugin (18.76s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-qk4df" [78ef06d5-ca8a-4eff-9c6c-77a168170787] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.0071561s
addons_test.go:955: (dbg) Run:  out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-809100
addons_test.go:955: (dbg) Done: out/minikube-windows-amd64.exe addons disable nvidia-device-plugin -p addons-809100: (13.7475141s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (18.76s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-fdlqx" [fd65cd55-1067-4c1b-b6dd-ab7445fff918] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0047847s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.28s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-809100 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-809100 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.28s)

TestAddons/StoppedEnableDisable (50.09s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-809100
addons_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-809100: (38.4808031s)
addons_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-809100
addons_test.go:176: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-809100: (4.5973154s)
addons_test.go:180: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-809100
addons_test.go:180: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-809100: (4.3973759s)
addons_test.go:185: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-809100
addons_test.go:185: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-809100: (2.6166091s)
--- PASS: TestAddons/StoppedEnableDisable (50.09s)

TestCertOptions (505.9s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-878700 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
E0507 20:35:23.599412    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-878700 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: (7m28.0357272s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-878700 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Done: out/minikube-windows-amd64.exe -p cert-options-878700 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": (9.2180369s)
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-878700 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-878700 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Done: out/minikube-windows-amd64.exe ssh -p cert-options-878700 -- "sudo cat /etc/kubernetes/admin.conf": (8.7145464s)
helpers_test.go:175: Cleaning up "cert-options-878700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-878700
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-878700: (39.7955969s)
--- PASS: TestCertOptions (505.90s)

TestCertExpiration (928.98s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-347300 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-347300 --memory=2048 --cert-expiration=3m --driver=hyperv: (5m25.7023827s)
E0507 20:40:01.624585    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 20:40:06.873573    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-347300 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-347300 --memory=2048 --cert-expiration=8760h --driver=hyperv: (6m22.3863205s)
helpers_test.go:175: Cleaning up "cert-expiration-347300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-347300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-347300: (40.8813774s)
--- PASS: TestCertExpiration (928.98s)

TestDockerFlags (458.98s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-153400 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-153400 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (6m41.3787409s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-153400 ssh "sudo systemctl show docker --property=Environment --no-pager"
E0507 20:40:23.622882    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-153400 ssh "sudo systemctl show docker --property=Environment --no-pager": (8.9007415s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-153400 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-153400 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (9.1456805s)
helpers_test.go:175: Cleaning up "docker-flags-153400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-153400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-153400: (39.5500529s)
--- PASS: TestDockerFlags (458.98s)

TestForceSystemdFlag (485.91s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-867900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
E0507 20:23:26.802564    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-867900 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (6m58.5384367s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-867900 ssh "docker info --format {{.CgroupDriver}}"
E0507 20:30:23.581364    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-867900 ssh "docker info --format {{.CgroupDriver}}": (10.0211045s)
helpers_test.go:175: Cleaning up "force-systemd-flag-867900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-867900
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-867900: (57.3451011s)
--- PASS: TestForceSystemdFlag (485.91s)

TestForceSystemdEnv (330.33s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-277000 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-277000 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (4m35.9481517s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-277000 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-277000 ssh "docker info --format {{.CgroupDriver}}": (9.1272318s)
helpers_test.go:175: Cleaning up "force-systemd-env-277000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-277000
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-277000: (45.2561662s)
--- PASS: TestForceSystemdEnv (330.33s)

TestErrorSpam/start (15.39s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 start --dry-run: (5.0186651s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 start --dry-run: (5.2290386s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 start --dry-run
E0507 18:13:06.986533    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 start --dry-run: (5.1389722s)
--- PASS: TestErrorSpam/start (15.39s)

TestErrorSpam/status (33.03s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 status: (11.3611679s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 status: (10.7477493s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 status: (10.9180428s)
--- PASS: TestErrorSpam/status (33.03s)

TestErrorSpam/pause (20.25s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 pause: (6.9244706s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 pause: (6.647975s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 pause: (6.6733528s)
--- PASS: TestErrorSpam/pause (20.25s)

TestErrorSpam/unpause (20.45s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 unpause: (6.896s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 unpause: (6.8084232s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 unpause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 unpause: (6.7416022s)
--- PASS: TestErrorSpam/unpause (20.45s)

TestErrorSpam/stop (52.02s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 stop: (31.7450374s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 stop: (10.2924664s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-751800 --log_dir C:\Users\jenkins.minikube5\AppData\Local\Temp\nospam-751800 stop: (9.9807213s)
--- PASS: TestErrorSpam/stop (52.02s)

TestFunctional/serial/CopySyncFile (0.03s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: C:\Users\jenkins.minikube5\minikube-integration\.minikube\files\etc\test\nested\copy\9992\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.03s)

TestFunctional/serial/StartWithProxy (188.78s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-527400 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0507 18:15:50.849222    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
functional_test.go:2230: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-527400 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m8.7716306s)
--- PASS: TestFunctional/serial/StartWithProxy (188.78s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (117.04s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-527400 --alsologtostderr -v=8
E0507 18:20:23.054877    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
functional_test.go:655: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-527400 --alsologtostderr -v=8: (1m57.0335133s)
functional_test.go:659: soft start took 1m57.0347704s for "functional-527400" cluster.
--- PASS: TestFunctional/serial/SoftStart (117.04s)

TestFunctional/serial/KubeContext (0.12s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.12s)

TestFunctional/serial/KubectlGetPods (0.21s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-527400 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.21s)

TestFunctional/serial/CacheCmd/cache/add_remote (23.86s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 cache add registry.k8s.io/pause:3.1: (8.0263919s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 cache add registry.k8s.io/pause:3.3: (7.9774763s)
functional_test.go:1045: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 cache add registry.k8s.io/pause:latest: (7.8561504s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (23.86s)

TestFunctional/serial/CacheCmd/cache/add_local (9.8s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-527400 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1537907725\001
functional_test.go:1073: (dbg) Done: docker build -t minikube-local-cache-test:functional-527400 C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1537907725\001: (1.9775955s)
functional_test.go:1085: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 cache add minikube-local-cache-test:functional-527400
functional_test.go:1085: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 cache add minikube-local-cache-test:functional-527400: (7.4478601s)
functional_test.go:1090: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 cache delete minikube-local-cache-test:functional-527400
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-527400
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (9.80s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.2s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.20s)

TestFunctional/serial/CacheCmd/cache/list (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.21s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh sudo crictl images
functional_test.go:1120: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 ssh sudo crictl images: (8.4522845s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (8.45s)

TestFunctional/serial/CacheCmd/cache/cache_reload (32.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1143: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 ssh sudo docker rmi registry.k8s.io/pause:latest: (8.4199323s)
functional_test.go:1149: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-527400 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (8.5117107s)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	W0507 18:21:30.186787   10516 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 cache reload
functional_test.go:1154: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 cache reload: (7.2411248s)
functional_test.go:1159: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1159: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (8.4121017s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (32.59s)

TestFunctional/serial/CacheCmd/cache/delete (0.43s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.43s)

TestFunctional/serial/MinikubeKubectlCmd (0.44s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 kubectl -- --context functional-527400 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.44s)

TestFunctional/serial/ExtraConfig (118.5s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-527400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-527400 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m58.4981957s)
functional_test.go:757: restart took 1m58.4984807s for "functional-527400" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (118.50s)

TestFunctional/serial/ComponentHealth (0.15s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-527400 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.15s)

TestFunctional/serial/LogsCmd (7.66s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 logs
functional_test.go:1232: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 logs: (7.6640059s)
--- PASS: TestFunctional/serial/LogsCmd (7.66s)

TestFunctional/serial/LogsFileCmd (9.51s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2829424212\001\logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 logs --file C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2829424212\001\logs.txt: (9.5029832s)
--- PASS: TestFunctional/serial/LogsFileCmd (9.51s)

TestFunctional/serial/InvalidService (19.58s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-527400 apply -f testdata\invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-527400
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-527400: exit status 115 (14.9888985s)

-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://172.19.129.80:32479 |
	|-----------|-------------|-------------|----------------------------|
	
	

-- /stdout --
** stderr ** 
	W0507 18:24:44.547629    9084 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube_service_d27a1c5599baa2f8050d003f41b0266333639286_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-527400 delete -f testdata\invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-527400 delete -f testdata\invalidsvc.yaml: (1.2320638s)
--- PASS: TestFunctional/serial/InvalidService (19.58s)

TestFunctional/parallel/StatusCmd (36.92s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 status
functional_test.go:850: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 status: (11.9644318s)
functional_test.go:856: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:856: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (12.5360308s)
functional_test.go:868: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 status -o json
functional_test.go:868: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 status -o json: (12.4146365s)
--- PASS: TestFunctional/parallel/StatusCmd (36.92s)

TestFunctional/parallel/ServiceCmdConnect (28.05s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-527400 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-527400 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-vvcxj" [79bbaa79-e55e-4f4c-9902-8f31713b5694] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-vvcxj" [79bbaa79-e55e-4f4c-9902-8f31713b5694] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.0270366s
functional_test.go:1645: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 service hello-node-connect --url
functional_test.go:1645: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 service hello-node-connect --url: (16.6678616s)
functional_test.go:1651: found endpoint for hello-node-connect: http://172.19.129.80:30402
functional_test.go:1671: http://172.19.129.80:30402: success! body:

Hostname: hello-node-connect-57b4589c47-vvcxj

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.19.129.80:8080/

Request Headers:
	accept-encoding=gzip
	host=172.19.129.80:30402
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (28.05s)

TestFunctional/parallel/AddonsCmd (0.57s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.57s)

TestFunctional/parallel/PersistentVolumeClaim (38.51s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [514d12a0-9694-41b7-9ed5-5ae68ad0a037] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0229761s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-527400 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-527400 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-527400 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-527400 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [38003736-39f1-4967-b7a9-26a580941384] Pending
helpers_test.go:344: "sp-pod" [38003736-39f1-4967-b7a9-26a580941384] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [38003736-39f1-4967-b7a9-26a580941384] Running
E0507 18:26:46.263181    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 22.0086845s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-527400 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-527400 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-527400 delete -f testdata/storage-provisioner/pod.yaml: (1.472263s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-527400 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [604f6773-8330-4688-bdc3-1a368e8c35e3] Pending
helpers_test.go:344: "sp-pod" [604f6773-8330-4688-bdc3-1a368e8c35e3] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [604f6773-8330-4688-bdc3-1a368e8c35e3] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0100905s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-527400 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.51s)

TestFunctional/parallel/SSHCmd (18.07s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh "echo hello"
functional_test.go:1721: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 ssh "echo hello": (9.0395189s)
functional_test.go:1738: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh "cat /etc/hostname"
functional_test.go:1738: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 ssh "cat /etc/hostname": (9.0284449s)
--- PASS: TestFunctional/parallel/SSHCmd (18.07s)

TestFunctional/parallel/CpCmd (53.48s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 cp testdata\cp-test.txt /home/docker/cp-test.txt: (7.662585s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh -n functional-527400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 ssh -n functional-527400 "sudo cat /home/docker/cp-test.txt": (8.9822829s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 cp functional-527400:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd2686032904\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 cp functional-527400:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestFunctionalparallelCpCmd2686032904\001\cp-test.txt: (9.5859398s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh -n functional-527400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 ssh -n functional-527400 "sudo cat /home/docker/cp-test.txt": (9.7098674s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.1353293s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh -n functional-527400 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 ssh -n functional-527400 "sudo cat /tmp/does/not/exist/cp-test.txt": (9.3917036s)
--- PASS: TestFunctional/parallel/CpCmd (53.48s)

TestFunctional/parallel/MySQL (56.2s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-527400 replace --force -f testdata\mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-p88hx" [3a83b0bd-7425-43d8-8a81-f87026354467] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-p88hx" [3a83b0bd-7425-43d8-8a81-f87026354467] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 43.0139931s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-527400 exec mysql-64454c8b5c-p88hx -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-527400 exec mysql-64454c8b5c-p88hx -- mysql -ppassword -e "show databases;": exit status 1 (260.4032ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-527400 exec mysql-64454c8b5c-p88hx -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-527400 exec mysql-64454c8b5c-p88hx -- mysql -ppassword -e "show databases;": exit status 1 (247.1767ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-527400 exec mysql-64454c8b5c-p88hx -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-527400 exec mysql-64454c8b5c-p88hx -- mysql -ppassword -e "show databases;": exit status 1 (290.2265ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-527400 exec mysql-64454c8b5c-p88hx -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-527400 exec mysql-64454c8b5c-p88hx -- mysql -ppassword -e "show databases;": exit status 1 (265.4714ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-527400 exec mysql-64454c8b5c-p88hx -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-527400 exec mysql-64454c8b5c-p88hx -- mysql -ppassword -e "show databases;": exit status 1 (327.0143ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-527400 exec mysql-64454c8b5c-p88hx -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (56.20s)

TestFunctional/parallel/FileSync (8.97s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/9992/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh "sudo cat /etc/test/nested/copy/9992/hosts"
functional_test.go:1927: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 ssh "sudo cat /etc/test/nested/copy/9992/hosts": (8.9732369s)
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (8.97s)

TestFunctional/parallel/CertSync (55.49s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/9992.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh "sudo cat /etc/ssl/certs/9992.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 ssh "sudo cat /etc/ssl/certs/9992.pem": (9.0750974s)
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/9992.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh "sudo cat /usr/share/ca-certificates/9992.pem"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 ssh "sudo cat /usr/share/ca-certificates/9992.pem": (9.1134756s)
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1969: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 ssh "sudo cat /etc/ssl/certs/51391683.0": (10.1362677s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/99922.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh "sudo cat /etc/ssl/certs/99922.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 ssh "sudo cat /etc/ssl/certs/99922.pem": (9.1251459s)
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/99922.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh "sudo cat /usr/share/ca-certificates/99922.pem"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 ssh "sudo cat /usr/share/ca-certificates/99922.pem": (8.996028s)
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:1996: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (9.0355186s)
--- PASS: TestFunctional/parallel/CertSync (55.49s)

TestFunctional/parallel/NodeLabels (0.17s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-527400 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.17s)

TestFunctional/parallel/NonActiveRuntimeDisabled (9.22s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-527400 ssh "sudo systemctl is-active crio": exit status 1 (9.2193421s)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	W0507 18:25:54.256446   11700 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (9.22s)

TestFunctional/parallel/License (2.49s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2284: (dbg) Done: out/minikube-windows-amd64.exe license: (2.4653364s)
--- PASS: TestFunctional/parallel/License (2.49s)

TestFunctional/parallel/ServiceCmd/DeployApp (16.39s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-527400 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-527400 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-rnzxf" [26acb03d-75cc-461e-8a96-0f2a2cb2a846] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-rnzxf" [26acb03d-75cc-461e-8a96-0f2a2cb2a846] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 16.0082143s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (16.39s)

TestFunctional/parallel/ProfileCmd/profile_not_create (10.15s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1271: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (9.7450868s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (10.15s)

TestFunctional/parallel/ProfileCmd/profile_list (9.89s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1306: (dbg) Done: out/minikube-windows-amd64.exe profile list: (9.6863592s)
functional_test.go:1311: Took "9.6877747s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1325: Took "198.48ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (9.89s)

TestFunctional/parallel/ServiceCmd/List (12.51s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 service list
functional_test.go:1455: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 service list: (12.5079762s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (12.51s)

                                                
TestFunctional/parallel/ProfileCmd/profile_json_output (10.12s)

functional_test.go:1357: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
E0507 18:25:23.068446    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
functional_test.go:1357: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (9.8959069s)
functional_test.go:1362: Took "9.8965258s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1375: Took "222.4353ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (10.12s)

TestFunctional/parallel/ServiceCmd/JSONOutput (12.4s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 service list -o json
functional_test.go:1485: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 service list -o json: (12.3942601s)
functional_test.go:1490: Took "12.395328s" to run "out/minikube-windows-amd64.exe -p functional-527400 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (12.40s)

TestFunctional/parallel/DockerEnv/powershell (40.48s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:495: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-527400 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-527400"
functional_test.go:495: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-527400 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-527400": (26.4536557s)
functional_test.go:518: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-527400 docker-env | Invoke-Expression ; docker images"
functional_test.go:518: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-527400 docker-env | Invoke-Expression ; docker images": (14.0153764s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (40.48s)

TestFunctional/parallel/Version/short (0.2s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 version --short
--- PASS: TestFunctional/parallel/Version/short (0.20s)

TestFunctional/parallel/Version/components (7.31s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 version -o=json --components
functional_test.go:2266: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 version -o=json --components: (7.3048984s)
--- PASS: TestFunctional/parallel/Version/components (7.31s)

TestFunctional/parallel/ImageCommands/ImageListShort (6.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image ls --format short --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image ls --format short --alsologtostderr: (6.9507998s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-527400 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/google-containers/addon-resizer:functional-527400
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-527400
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-527400 image ls --format short --alsologtostderr:
W0507 18:28:16.181657    3656 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0507 18:28:16.242560    3656 out.go:291] Setting OutFile to fd 936 ...
I0507 18:28:16.242708    3656 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 18:28:16.242708    3656 out.go:304] Setting ErrFile to fd 1004...
I0507 18:28:16.242708    3656 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 18:28:16.255748    3656 config.go:182] Loaded profile config "functional-527400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 18:28:16.255748    3656 config.go:182] Loaded profile config "functional-527400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 18:28:16.256738    3656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
I0507 18:28:18.316826    3656 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:28:18.316826    3656 main.go:141] libmachine: [stderr =====>] : 
I0507 18:28:18.326804    3656 ssh_runner.go:195] Run: systemctl --version
I0507 18:28:18.326804    3656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
I0507 18:28:20.355938    3656 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:28:20.356550    3656 main.go:141] libmachine: [stderr =====>] : 
I0507 18:28:20.356550    3656 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
I0507 18:28:22.730489    3656 main.go:141] libmachine: [stdout =====>] : 172.19.129.80

I0507 18:28:22.730489    3656 main.go:141] libmachine: [stderr =====>] : 
I0507 18:28:22.730489    3656 sshutil.go:53] new ssh client: &{IP:172.19.129.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-527400\id_rsa Username:docker}
I0507 18:28:22.832775    3656 ssh_runner.go:235] Completed: systemctl --version: (4.5056791s)
I0507 18:28:22.839424    3656 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (6.95s)

TestFunctional/parallel/ImageCommands/ImageListTable (6.71s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image ls --format table --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image ls --format table --alsologtostderr: (6.7129839s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-527400 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-527400 | 94a208b1f4367 | 30B    |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | alpine            | 501d84f5d0648 | 48.3MB |
| registry.k8s.io/kube-apiserver              | v1.30.0           | c42f13656d0b2 | 117MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.0           | c7aad43836fa5 | 111MB  |
| gcr.io/google-containers/addon-resizer      | functional-527400 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| docker.io/library/nginx                     | latest            | 1d668e06f1e53 | 188MB  |
| registry.k8s.io/kube-proxy                  | v1.30.0           | a0bf559e280cf | 84.7MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| docker.io/localhost/my-image                | functional-527400 | 74e7effc90514 | 1.24MB |
| registry.k8s.io/kube-scheduler              | v1.30.0           | 259c8277fcbbc | 62MB   |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-527400 image ls --format table --alsologtostderr:
W0507 18:28:36.942814    3176 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0507 18:28:36.998797    3176 out.go:291] Setting OutFile to fd 920 ...
I0507 18:28:36.999814    3176 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 18:28:36.999814    3176 out.go:304] Setting ErrFile to fd 936...
I0507 18:28:36.999814    3176 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 18:28:37.011809    3176 config.go:182] Loaded profile config "functional-527400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 18:28:37.012784    3176 config.go:182] Loaded profile config "functional-527400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 18:28:37.012784    3176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
I0507 18:28:39.069876    3176 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:28:39.069876    3176 main.go:141] libmachine: [stderr =====>] : 
I0507 18:28:39.078770    3176 ssh_runner.go:195] Run: systemctl --version
I0507 18:28:39.078770    3176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
I0507 18:28:41.119279    3176 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:28:41.119279    3176 main.go:141] libmachine: [stderr =====>] : 
I0507 18:28:41.119279    3176 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
I0507 18:28:43.410509    3176 main.go:141] libmachine: [stdout =====>] : 172.19.129.80

I0507 18:28:43.410509    3176 main.go:141] libmachine: [stderr =====>] : 
I0507 18:28:43.410509    3176 sshutil.go:53] new ssh client: &{IP:172.19.129.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-527400\id_rsa Username:docker}
I0507 18:28:43.504475    3176 ssh_runner.go:235] Completed: systemctl --version: (4.4254182s)
I0507 18:28:43.511053    3176 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (6.71s)

TestFunctional/parallel/ImageCommands/ImageListJson (6.87s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image ls --format json --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image ls --format json --alsologtostderr: (6.8714249s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-527400 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"1d668e06f1e534ab338404ba891c37d618dd53c9073dcdd4ebde82aa7643f83f","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"111000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-527400"],"size":"32900000"},{"id":"94a208b1f4367adda1da138498ff564fe6b755f8b422865865aa65d6053057ef","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-527400"],"size":"30"},{"id":"501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.0"],"size":"62000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"84700000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-527400 image ls --format json --alsologtostderr:
W0507 18:28:30.069307    7432 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0507 18:28:30.136853    7432 out.go:291] Setting OutFile to fd 936 ...
I0507 18:28:30.137367    7432 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 18:28:30.137428    7432 out.go:304] Setting ErrFile to fd 920...
I0507 18:28:30.137428    7432 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 18:28:30.150336    7432 config.go:182] Loaded profile config "functional-527400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 18:28:30.150336    7432 config.go:182] Loaded profile config "functional-527400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 18:28:30.151338    7432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
I0507 18:28:32.213813    7432 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:28:32.213813    7432 main.go:141] libmachine: [stderr =====>] : 
I0507 18:28:32.222878    7432 ssh_runner.go:195] Run: systemctl --version
I0507 18:28:32.223416    7432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
I0507 18:28:34.310016    7432 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:28:34.310016    7432 main.go:141] libmachine: [stderr =====>] : 
I0507 18:28:34.310016    7432 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
I0507 18:28:36.649296    7432 main.go:141] libmachine: [stdout =====>] : 172.19.129.80

I0507 18:28:36.649296    7432 main.go:141] libmachine: [stderr =====>] : 
I0507 18:28:36.649838    7432 sshutil.go:53] new ssh client: &{IP:172.19.129.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-527400\id_rsa Username:docker}
I0507 18:28:36.768775    7432 ssh_runner.go:235] Completed: systemctl --version: (4.5456026s)
I0507 18:28:36.774785    7432 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (6.87s)

TestFunctional/parallel/ImageCommands/ImageListYaml (6.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image ls --format yaml --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image ls --format yaml --alsologtostderr: (6.9452274s)
functional_test.go:265: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-527400 image ls --format yaml --alsologtostderr:
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "62000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 501d84f5d06487ff81e506134dc922ed4fd2080d5521eb5b6ee4054fa17d15c4
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "111000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 94a208b1f4367adda1da138498ff564fe6b755f8b422865865aa65d6053057ef
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-527400
size: "30"
- id: 1d668e06f1e534ab338404ba891c37d618dd53c9073dcdd4ebde82aa7643f83f
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "84700000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-527400
size: "32900000"

functional_test.go:268: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-527400 image ls --format yaml --alsologtostderr:
W0507 18:28:23.123740    6276 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0507 18:28:23.185184    6276 out.go:291] Setting OutFile to fd 964 ...
I0507 18:28:23.185820    6276 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 18:28:23.185864    6276 out.go:304] Setting ErrFile to fd 872...
I0507 18:28:23.185864    6276 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 18:28:23.200769    6276 config.go:182] Loaded profile config "functional-527400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 18:28:23.200769    6276 config.go:182] Loaded profile config "functional-527400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 18:28:23.201907    6276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
I0507 18:28:25.261782    6276 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:28:25.261782    6276 main.go:141] libmachine: [stderr =====>] : 
I0507 18:28:25.271908    6276 ssh_runner.go:195] Run: systemctl --version
I0507 18:28:25.271908    6276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
I0507 18:28:27.350396    6276 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:28:27.350396    6276 main.go:141] libmachine: [stderr =====>] : 
I0507 18:28:27.350396    6276 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
I0507 18:28:29.806737    6276 main.go:141] libmachine: [stdout =====>] : 172.19.129.80

I0507 18:28:29.806737    6276 main.go:141] libmachine: [stderr =====>] : 
I0507 18:28:29.806737    6276 sshutil.go:53] new ssh client: &{IP:172.19.129.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-527400\id_rsa Username:docker}
I0507 18:28:29.914823    6276 ssh_runner.go:235] Completed: systemctl --version: (4.6426129s)
I0507 18:28:29.922008    6276 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (6.95s)

TestFunctional/parallel/ImageCommands/ImageBuild (24.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-527400 ssh pgrep buildkitd: exit status 1 (9.0985343s)

** stderr ** 
	W0507 18:28:25.485859    3228 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image build -t localhost/my-image:functional-527400 testdata\build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image build -t localhost/my-image:functional-527400 testdata\build --alsologtostderr: (9.1118773s)
functional_test.go:319: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-527400 image build -t localhost/my-image:functional-527400 testdata\build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 9fcad721fa55
---> Removed intermediate container 9fcad721fa55
---> 648db62b0f45
Step 3/3 : ADD content.txt /
---> 74e7effc9051
Successfully built 74e7effc9051
Successfully tagged localhost/my-image:functional-527400
functional_test.go:322: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-527400 image build -t localhost/my-image:functional-527400 testdata\build --alsologtostderr:
W0507 18:28:34.582118   13428 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
I0507 18:28:34.644119   13428 out.go:291] Setting OutFile to fd 920 ...
I0507 18:28:34.663201   13428 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 18:28:34.663201   13428 out.go:304] Setting ErrFile to fd 936...
I0507 18:28:34.663201   13428 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0507 18:28:34.684607   13428 config.go:182] Loaded profile config "functional-527400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 18:28:34.703595   13428 config.go:182] Loaded profile config "functional-527400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0507 18:28:34.704580   13428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
I0507 18:28:36.745309   13428 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:28:36.745309   13428 main.go:141] libmachine: [stderr =====>] : 
I0507 18:28:36.756820   13428 ssh_runner.go:195] Run: systemctl --version
I0507 18:28:36.756820   13428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-527400 ).state
I0507 18:28:38.819622   13428 main.go:141] libmachine: [stdout =====>] : Running

I0507 18:28:38.819622   13428 main.go:141] libmachine: [stderr =====>] : 
I0507 18:28:38.819622   13428 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-527400 ).networkadapters[0]).ipaddresses[0]
I0507 18:28:41.262558   13428 main.go:141] libmachine: [stdout =====>] : 172.19.129.80

I0507 18:28:41.263398   13428 main.go:141] libmachine: [stderr =====>] : 
I0507 18:28:41.263466   13428 sshutil.go:53] new ssh client: &{IP:172.19.129.80 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\functional-527400\id_rsa Username:docker}
I0507 18:28:41.363088   13428 ssh_runner.go:235] Completed: systemctl --version: (4.6059702s)
I0507 18:28:41.363088   13428 build_images.go:161] Building image from path: C:\Users\jenkins.minikube5\AppData\Local\Temp\build.3079300104.tar
I0507 18:28:41.374907   13428 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0507 18:28:41.400961   13428 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3079300104.tar
I0507 18:28:41.408172   13428 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3079300104.tar: stat -c "%s %y" /var/lib/minikube/build/build.3079300104.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3079300104.tar': No such file or directory
I0507 18:28:41.409023   13428 ssh_runner.go:362] scp C:\Users\jenkins.minikube5\AppData\Local\Temp\build.3079300104.tar --> /var/lib/minikube/build/build.3079300104.tar (3072 bytes)
I0507 18:28:41.464116   13428 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3079300104
I0507 18:28:41.491112   13428 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3079300104 -xf /var/lib/minikube/build/build.3079300104.tar
I0507 18:28:41.509525   13428 docker.go:360] Building image: /var/lib/minikube/build/build.3079300104
I0507 18:28:41.516254   13428 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-527400 /var/lib/minikube/build/build.3079300104
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

I0507 18:28:43.526247   13428 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-527400 /var/lib/minikube/build/build.3079300104: (2.0098022s)
I0507 18:28:43.535219   13428 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3079300104
I0507 18:28:43.564202   13428 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3079300104.tar
I0507 18:28:43.584250   13428 build_images.go:217] Built localhost/my-image:functional-527400 from C:\Users\jenkins.minikube5\AppData\Local\Temp\build.3079300104.tar
I0507 18:28:43.584250   13428 build_images.go:133] succeeded building to: functional-527400
I0507 18:28:43.584250   13428 build_images.go:134] failed building to: 
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image ls: (6.5721001s)
E0507 18:30:23.092948    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (24.78s)

TestFunctional/parallel/ImageCommands/Setup (4.05s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (3.8043444s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-527400
--- PASS: TestFunctional/parallel/ImageCommands/Setup (4.05s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (22.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image load --daemon gcr.io/google-containers/addon-resizer:functional-527400 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image load --daemon gcr.io/google-containers/addon-resizer:functional-527400 --alsologtostderr: (14.8764117s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image ls: (7.2076489s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (22.08s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (17.98s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image load --daemon gcr.io/google-containers/addon-resizer:functional-527400 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image load --daemon gcr.io/google-containers/addon-resizer:functional-527400 --alsologtostderr: (11.0086601s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image ls: (6.9729718s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (17.98s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (23.49s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (3.2601367s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-527400
functional_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image load --daemon gcr.io/google-containers/addon-resizer:functional-527400 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image load --daemon gcr.io/google-containers/addon-resizer:functional-527400 --alsologtostderr: (13.0836371s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image ls: (6.9327023s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (23.49s)

TestFunctional/parallel/UpdateContextCmd/no_changes (2.81s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 update-context --alsologtostderr -v=2: (2.8117703s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.81s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.32s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 update-context --alsologtostderr -v=2: (2.3130952s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.32s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (2.30s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 update-context --alsologtostderr -v=2
functional_test.go:2115: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 update-context --alsologtostderr -v=2: (2.299642s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.30s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image save gcr.io/google-containers/addon-resizer:functional-527400 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image save gcr.io/google-containers/addon-resizer:functional-527400 C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (8.623192s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (8.62s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.04s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-527400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-527400 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-527400 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-527400 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 7080: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 13588: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (8.04s)

TestFunctional/parallel/ImageCommands/ImageRemove (15.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image rm gcr.io/google-containers/addon-resizer:functional-527400 --alsologtostderr
functional_test.go:391: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image rm gcr.io/google-containers/addon-resizer:functional-527400 --alsologtostderr: (7.822545s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image ls: (7.2399682s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (15.06s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-527400 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.51s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-527400 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [a0e5038f-f0df-4b3c-a1f8-ac52dc89f73f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [a0e5038f-f0df-4b3c-a1f8-ac52dc89f73f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.0122158s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.51s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (15.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image load C:\jenkins\workspace\Hyper-V_Windows_integration\addon-resizer-save.tar --alsologtostderr: (8.9157446s)
functional_test.go:447: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image ls
functional_test.go:447: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image ls: (6.8183212s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (15.73s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-527400 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 6912: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-527400
functional_test.go:423: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-527400 image save --daemon gcr.io/google-containers/addon-resizer:functional-527400 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-windows-amd64.exe -p functional-527400 image save --daemon gcr.io/google-containers/addon-resizer:functional-527400 --alsologtostderr: (10.3878383s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-527400
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (10.72s)

TestFunctional/delete_addon-resizer_images (0.4s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-527400
--- PASS: TestFunctional/delete_addon-resizer_images (0.40s)

TestFunctional/delete_my-image_image (0.15s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-527400
--- PASS: TestFunctional/delete_my-image_image (0.15s)

TestFunctional/delete_minikube_cached_images (0.15s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-527400
--- PASS: TestFunctional/delete_minikube_cached_images (0.15s)

TestMultiControlPlane/serial/StartCluster (648.28s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-210800 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0507 18:35:01.122872    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:35:01.138717    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:35:01.154480    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:35:01.186078    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:35:01.234088    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:35:01.328022    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:35:01.500704    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:35:01.824499    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:35:02.475668    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:35:03.768660    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:35:06.343104    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:35:11.475156    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:35:21.716543    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:35:23.113544    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 18:35:42.204295    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:36:23.175873    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:37:45.116204    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:40:01.141290    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:40:23.133521    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 18:40:28.980924    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-210800 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (10m15.9293798s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr: (32.3464204s)
--- PASS: TestMultiControlPlane/serial/StartCluster (648.28s)

TestMultiControlPlane/serial/DeployApp (10.94s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-210800 -- rollout status deployment/busybox: (3.3475462s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-45d7p -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-45d7p -- nslookup kubernetes.io: (1.6535587s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-5z998 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-5z998 -- nslookup kubernetes.io: (1.5811913s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-pkgxl -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-45d7p -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-5z998 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-pkgxl -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-45d7p -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-5z998 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-210800 -- exec busybox-fc5497c4f-pkgxl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (10.94s)

TestMultiControlPlane/serial/AddWorkerNode (228.47s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-210800 -v=7 --alsologtostderr
E0507 18:45:01.152892    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 18:45:23.149343    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-210800 -v=7 --alsologtostderr: (3m5.5476236s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr: (42.9207049s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (228.47s)

TestMultiControlPlane/serial/NodeLabels (0.17s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-210800 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.17s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (25.42s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (25.4183116s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (25.42s)

TestMultiControlPlane/serial/CopyFile (560.01s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 status --output json -v=7 --alsologtostderr
ha_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 status --output json -v=7 --alsologtostderr: (43.2214972s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp testdata\cp-test.txt ha-210800:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp testdata\cp-test.txt ha-210800:/home/docker/cp-test.txt: (8.5089225s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800 "sudo cat /home/docker/cp-test.txt": (8.430398s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3684481978\001\cp-test_ha-210800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3684481978\001\cp-test_ha-210800.txt: (8.4663258s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800 "sudo cat /home/docker/cp-test.txt": (8.4016801s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800:/home/docker/cp-test.txt ha-210800-m02:/home/docker/cp-test_ha-210800_ha-210800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800:/home/docker/cp-test.txt ha-210800-m02:/home/docker/cp-test_ha-210800_ha-210800-m02.txt: (14.6151843s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800 "sudo cat /home/docker/cp-test.txt": (8.4790609s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m02 "sudo cat /home/docker/cp-test_ha-210800_ha-210800-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m02 "sudo cat /home/docker/cp-test_ha-210800_ha-210800-m02.txt": (8.4232476s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800:/home/docker/cp-test.txt ha-210800-m03:/home/docker/cp-test_ha-210800_ha-210800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800:/home/docker/cp-test.txt ha-210800-m03:/home/docker/cp-test_ha-210800_ha-210800-m03.txt: (14.8069971s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800 "sudo cat /home/docker/cp-test.txt"
E0507 18:50:01.185219    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800 "sudo cat /home/docker/cp-test.txt": (8.6108131s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m03 "sudo cat /home/docker/cp-test_ha-210800_ha-210800-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m03 "sudo cat /home/docker/cp-test_ha-210800_ha-210800-m03.txt": (8.3914498s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800:/home/docker/cp-test.txt ha-210800-m04:/home/docker/cp-test_ha-210800_ha-210800-m04.txt
E0507 18:50:23.168727    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800:/home/docker/cp-test.txt ha-210800-m04:/home/docker/cp-test_ha-210800_ha-210800-m04.txt: (14.8558718s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800 "sudo cat /home/docker/cp-test.txt": (8.4464583s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m04 "sudo cat /home/docker/cp-test_ha-210800_ha-210800-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m04 "sudo cat /home/docker/cp-test_ha-210800_ha-210800-m04.txt": (8.4160339s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp testdata\cp-test.txt ha-210800-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp testdata\cp-test.txt ha-210800-m02:/home/docker/cp-test.txt: (8.3907609s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m02 "sudo cat /home/docker/cp-test.txt": (8.5372602s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3684481978\001\cp-test_ha-210800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3684481978\001\cp-test_ha-210800-m02.txt: (8.6340704s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m02 "sudo cat /home/docker/cp-test.txt": (8.5339164s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m02:/home/docker/cp-test.txt ha-210800:/home/docker/cp-test_ha-210800-m02_ha-210800.txt
E0507 18:51:24.386456    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m02:/home/docker/cp-test.txt ha-210800:/home/docker/cp-test_ha-210800-m02_ha-210800.txt: (14.8903283s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m02 "sudo cat /home/docker/cp-test.txt": (8.4716459s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800 "sudo cat /home/docker/cp-test_ha-210800-m02_ha-210800.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800 "sudo cat /home/docker/cp-test_ha-210800-m02_ha-210800.txt": (8.4999463s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m02:/home/docker/cp-test.txt ha-210800-m03:/home/docker/cp-test_ha-210800-m02_ha-210800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m02:/home/docker/cp-test.txt ha-210800-m03:/home/docker/cp-test_ha-210800-m02_ha-210800-m03.txt: (14.792038s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m02 "sudo cat /home/docker/cp-test.txt": (8.4147245s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m03 "sudo cat /home/docker/cp-test_ha-210800-m02_ha-210800-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m03 "sudo cat /home/docker/cp-test_ha-210800-m02_ha-210800-m03.txt": (8.4461669s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m02:/home/docker/cp-test.txt ha-210800-m04:/home/docker/cp-test_ha-210800-m02_ha-210800-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m02:/home/docker/cp-test.txt ha-210800-m04:/home/docker/cp-test_ha-210800-m02_ha-210800-m04.txt: (14.7628016s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m02 "sudo cat /home/docker/cp-test.txt": (8.4807923s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m04 "sudo cat /home/docker/cp-test_ha-210800-m02_ha-210800-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m04 "sudo cat /home/docker/cp-test_ha-210800-m02_ha-210800-m04.txt": (8.4844012s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp testdata\cp-test.txt ha-210800-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp testdata\cp-test.txt ha-210800-m03:/home/docker/cp-test.txt: (8.4874296s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m03 "sudo cat /home/docker/cp-test.txt": (8.4524656s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3684481978\001\cp-test_ha-210800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3684481978\001\cp-test_ha-210800-m03.txt: (8.541321s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m03 "sudo cat /home/docker/cp-test.txt": (8.5064405s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m03:/home/docker/cp-test.txt ha-210800:/home/docker/cp-test_ha-210800-m03_ha-210800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m03:/home/docker/cp-test.txt ha-210800:/home/docker/cp-test_ha-210800-m03_ha-210800.txt: (14.7504405s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m03 "sudo cat /home/docker/cp-test.txt": (8.4719363s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800 "sudo cat /home/docker/cp-test_ha-210800-m03_ha-210800.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800 "sudo cat /home/docker/cp-test_ha-210800-m03_ha-210800.txt": (8.4764277s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m03:/home/docker/cp-test.txt ha-210800-m02:/home/docker/cp-test_ha-210800-m03_ha-210800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m03:/home/docker/cp-test.txt ha-210800-m02:/home/docker/cp-test_ha-210800-m03_ha-210800-m02.txt: (14.7108326s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m03 "sudo cat /home/docker/cp-test.txt": (8.4275865s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m02 "sudo cat /home/docker/cp-test_ha-210800-m03_ha-210800-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m02 "sudo cat /home/docker/cp-test_ha-210800-m03_ha-210800-m02.txt": (8.5173374s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m03:/home/docker/cp-test.txt ha-210800-m04:/home/docker/cp-test_ha-210800-m03_ha-210800-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m03:/home/docker/cp-test.txt ha-210800-m04:/home/docker/cp-test_ha-210800-m03_ha-210800-m04.txt: (14.7112914s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m03 "sudo cat /home/docker/cp-test.txt": (8.4662452s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m04 "sudo cat /home/docker/cp-test_ha-210800-m03_ha-210800-m04.txt"
E0507 18:55:01.196224    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m04 "sudo cat /home/docker/cp-test_ha-210800-m03_ha-210800-m04.txt": (8.4401125s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp testdata\cp-test.txt ha-210800-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp testdata\cp-test.txt ha-210800-m04:/home/docker/cp-test.txt: (8.523652s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m04 "sudo cat /home/docker/cp-test.txt"
E0507 18:55:23.202942    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m04 "sudo cat /home/docker/cp-test.txt": (8.5074661s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3684481978\001\cp-test_ha-210800-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile3684481978\001\cp-test_ha-210800-m04.txt: (8.3875333s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m04 "sudo cat /home/docker/cp-test.txt": (8.4319188s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m04:/home/docker/cp-test.txt ha-210800:/home/docker/cp-test_ha-210800-m04_ha-210800.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m04:/home/docker/cp-test.txt ha-210800:/home/docker/cp-test_ha-210800-m04_ha-210800.txt: (14.8091912s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m04 "sudo cat /home/docker/cp-test.txt": (8.5938619s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800 "sudo cat /home/docker/cp-test_ha-210800-m04_ha-210800.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800 "sudo cat /home/docker/cp-test_ha-210800-m04_ha-210800.txt": (8.5865066s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m04:/home/docker/cp-test.txt ha-210800-m02:/home/docker/cp-test_ha-210800-m04_ha-210800-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m04:/home/docker/cp-test.txt ha-210800-m02:/home/docker/cp-test_ha-210800-m04_ha-210800-m02.txt: (14.9868736s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m04 "sudo cat /home/docker/cp-test.txt": (8.5955532s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m02 "sudo cat /home/docker/cp-test_ha-210800-m04_ha-210800-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m02 "sudo cat /home/docker/cp-test_ha-210800-m04_ha-210800-m02.txt": (8.4434915s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m04:/home/docker/cp-test.txt ha-210800-m03:/home/docker/cp-test_ha-210800-m04_ha-210800-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 cp ha-210800-m04:/home/docker/cp-test.txt ha-210800-m03:/home/docker/cp-test_ha-210800-m04_ha-210800-m03.txt: (14.8012259s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m04 "sudo cat /home/docker/cp-test.txt": (8.4290946s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m03 "sudo cat /home/docker/cp-test_ha-210800-m04_ha-210800-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 ssh -n ha-210800-m03 "sudo cat /home/docker/cp-test_ha-210800-m04_ha-210800-m03.txt": (8.5161337s)
--- PASS: TestMultiControlPlane/serial/CopyFile (560.01s)

TestMultiControlPlane/serial/StopSecondaryNode (67.02s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-windows-amd64.exe -p ha-210800 node stop m02 -v=7 --alsologtostderr: (32.8803864s)
ha_test.go:369: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-210800 status -v=7 --alsologtostderr: exit status 7 (34.1373491s)

-- stdout --
	ha-210800
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-210800-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-210800-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-210800-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	W0507 18:57:50.261664   11108 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0507 18:57:50.323527   11108 out.go:291] Setting OutFile to fd 1008 ...
	I0507 18:57:50.324494   11108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 18:57:50.324494   11108 out.go:304] Setting ErrFile to fd 964...
	I0507 18:57:50.324494   11108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 18:57:50.338289   11108 out.go:298] Setting JSON to false
	I0507 18:57:50.338838   11108 mustload.go:65] Loading cluster: ha-210800
	I0507 18:57:50.338838   11108 notify.go:220] Checking for updates...
	I0507 18:57:50.339364   11108 config.go:182] Loaded profile config "ha-210800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:57:50.339364   11108 status.go:255] checking status of ha-210800 ...
	I0507 18:57:50.340272   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:57:52.322030   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:57:52.322030   11108 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:57:52.322130   11108 status.go:330] ha-210800 host status = "Running" (err=<nil>)
	I0507 18:57:52.322130   11108 host.go:66] Checking if "ha-210800" exists ...
	I0507 18:57:52.322742   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:57:54.293484   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:57:54.293484   11108 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:57:54.293590   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:57:56.639237   11108 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:57:56.639907   11108 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:57:56.639907   11108 host.go:66] Checking if "ha-210800" exists ...
	I0507 18:57:56.648880   11108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0507 18:57:56.648975   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800 ).state
	I0507 18:57:58.642442   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:57:58.642442   11108 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:57:58.642685   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800 ).networkadapters[0]).ipaddresses[0]
	I0507 18:58:01.030526   11108 main.go:141] libmachine: [stdout =====>] : 172.19.132.69
	
	I0507 18:58:01.030526   11108 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:01.031196   11108 sshutil.go:53] new ssh client: &{IP:172.19.132.69 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800\id_rsa Username:docker}
	I0507 18:58:01.141714   11108 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.4924354s)
	I0507 18:58:01.150810   11108 ssh_runner.go:195] Run: systemctl --version
	I0507 18:58:01.170751   11108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 18:58:01.196836   11108 kubeconfig.go:125] found "ha-210800" server: "https://172.19.143.254:8443"
	I0507 18:58:01.196907   11108 api_server.go:166] Checking apiserver status ...
	I0507 18:58:01.205231   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 18:58:01.241253   11108 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2102/cgroup
	W0507 18:58:01.259421   11108 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2102/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0507 18:58:01.270630   11108 ssh_runner.go:195] Run: ls
	I0507 18:58:01.282801   11108 api_server.go:253] Checking apiserver healthz at https://172.19.143.254:8443/healthz ...
	I0507 18:58:01.291944   11108 api_server.go:279] https://172.19.143.254:8443/healthz returned 200:
	ok
	I0507 18:58:01.292046   11108 status.go:422] ha-210800 apiserver status = Running (err=<nil>)
	I0507 18:58:01.292046   11108 status.go:257] ha-210800 status: &{Name:ha-210800 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0507 18:58:01.292046   11108 status.go:255] checking status of ha-210800-m02 ...
	I0507 18:58:01.292760   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m02 ).state
	I0507 18:58:03.258953   11108 main.go:141] libmachine: [stdout =====>] : Off
	
	I0507 18:58:03.258953   11108 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:03.258953   11108 status.go:330] ha-210800-m02 host status = "Stopped" (err=<nil>)
	I0507 18:58:03.258953   11108 status.go:343] host is not running, skipping remaining checks
	I0507 18:58:03.258953   11108 status.go:257] ha-210800-m02 status: &{Name:ha-210800-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0507 18:58:03.259055   11108 status.go:255] checking status of ha-210800-m03 ...
	I0507 18:58:03.259587   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:58:05.194335   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:58:05.194335   11108 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:05.194335   11108 status.go:330] ha-210800-m03 host status = "Running" (err=<nil>)
	I0507 18:58:05.195296   11108 host.go:66] Checking if "ha-210800-m03" exists ...
	I0507 18:58:05.195996   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:58:07.132058   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:58:07.132058   11108 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:07.132058   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:58:09.418383   11108 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:58:09.418383   11108 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:09.418383   11108 host.go:66] Checking if "ha-210800-m03" exists ...
	I0507 18:58:09.427759   11108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0507 18:58:09.427759   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m03 ).state
	I0507 18:58:11.349961   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:58:11.350267   11108 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:11.350267   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m03 ).networkadapters[0]).ipaddresses[0]
	I0507 18:58:13.630639   11108 main.go:141] libmachine: [stdout =====>] : 172.19.137.224
	
	I0507 18:58:13.630639   11108 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:13.630852   11108 sshutil.go:53] new ssh client: &{IP:172.19.137.224 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m03\id_rsa Username:docker}
	I0507 18:58:13.741493   11108 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.3134432s)
	I0507 18:58:13.750869   11108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 18:58:13.778207   11108 kubeconfig.go:125] found "ha-210800" server: "https://172.19.143.254:8443"
	I0507 18:58:13.778207   11108 api_server.go:166] Checking apiserver status ...
	I0507 18:58:13.787136   11108 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 18:58:13.819240   11108 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2260/cgroup
	W0507 18:58:13.836649   11108 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2260/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0507 18:58:13.845815   11108 ssh_runner.go:195] Run: ls
	I0507 18:58:13.853064   11108 api_server.go:253] Checking apiserver healthz at https://172.19.143.254:8443/healthz ...
	I0507 18:58:13.861651   11108 api_server.go:279] https://172.19.143.254:8443/healthz returned 200:
	ok
	I0507 18:58:13.861651   11108 status.go:422] ha-210800-m03 apiserver status = Running (err=<nil>)
	I0507 18:58:13.861651   11108 status.go:257] ha-210800-m03 status: &{Name:ha-210800-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0507 18:58:13.862188   11108 status.go:255] checking status of ha-210800-m04 ...
	I0507 18:58:13.862814   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m04 ).state
	I0507 18:58:15.748126   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:58:15.748954   11108 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:15.749031   11108 status.go:330] ha-210800-m04 host status = "Running" (err=<nil>)
	I0507 18:58:15.749031   11108 host.go:66] Checking if "ha-210800-m04" exists ...
	I0507 18:58:15.749579   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m04 ).state
	I0507 18:58:17.667144   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:58:17.667200   11108 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:17.667200   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m04 ).networkadapters[0]).ipaddresses[0]
	I0507 18:58:19.938426   11108 main.go:141] libmachine: [stdout =====>] : 172.19.129.171
	
	I0507 18:58:19.938426   11108 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:19.938505   11108 host.go:66] Checking if "ha-210800-m04" exists ...
	I0507 18:58:19.946603   11108 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0507 18:58:19.946603   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-210800-m04 ).state
	I0507 18:58:21.847687   11108 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 18:58:21.847687   11108 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:21.847756   11108 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-210800-m04 ).networkadapters[0]).ipaddresses[0]
	I0507 18:58:24.132434   11108 main.go:141] libmachine: [stdout =====>] : 172.19.129.171
	
	I0507 18:58:24.132434   11108 main.go:141] libmachine: [stderr =====>] : 
	I0507 18:58:24.133474   11108 sshutil.go:53] new ssh client: &{IP:172.19.129.171 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\ha-210800-m04\id_rsa Username:docker}
	I0507 18:58:24.238678   11108 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.2917856s)
	I0507 18:58:24.248498   11108 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 18:58:24.271615   11108 status.go:257] ha-210800-m04 status: &{Name:ha-210800-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (67.02s)
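The `status` command traced in the stderr block above probes each node's disk usage over SSH with a small one-liner, `df -h /var | awk 'NR==2{print $5}'`. As a minimal standalone sketch of what that probe returns (the Use% column for the filesystem backing /var), assuming a Linux host with coreutils and awk available:

```shell
# Mirror of the ssh_runner disk probe seen in the log above.
# df prints a header row plus data rows; NR==2 selects the first data
# row, and $5 is the "Use%" column (a value such as "45%").
usage=$(df -h /var | awk 'NR==2{print $5}')
echo "var usage: $usage"
```

In the log this command runs on each cluster node via SSH during the status check; the sketch simply runs it locally to show the shape of the output.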

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (18.86s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (18.8549165s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (18.86s)

TestImageBuild/serial/Setup (184.46s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-048300 --driver=hyperv
E0507 19:08:04.456497    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-048300 --driver=hyperv: (3m4.4560928s)
--- PASS: TestImageBuild/serial/Setup (184.46s)

TestImageBuild/serial/NormalBuild (8.69s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-048300
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-048300: (8.6939231s)
--- PASS: TestImageBuild/serial/NormalBuild (8.69s)

TestImageBuild/serial/BuildWithBuildArg (7.97s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-048300
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-048300: (7.9736229s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (7.97s)

TestImageBuild/serial/BuildWithDockerIgnore (6.93s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-048300
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-048300: (6.9282624s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (6.93s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (6.78s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-048300
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-048300: (6.7763877s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (6.78s)

TestJSONOutput/start/Command (224.86s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-947100 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0507 19:10:01.253234    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 19:10:23.259346    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-947100 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m44.8541333s)
--- PASS: TestJSONOutput/start/Command (224.86s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (6.93s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-947100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-947100 --output=json --user=testUser: (6.9304647s)
--- PASS: TestJSONOutput/pause/Command (6.93s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (6.97s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-947100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-947100 --output=json --user=testUser: (6.9672946s)
--- PASS: TestJSONOutput/unpause/Command (6.97s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (32.7s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-947100 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-947100 --output=json --user=testUser: (32.6964649s)
--- PASS: TestJSONOutput/stop/Command (32.70s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (1.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-311700 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-311700 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (206.7088ms)
-- stdout --
	{"specversion":"1.0","id":"65d194a3-1f16-49ca-8dba-c23864fc2454","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-311700] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"12f4ef8f-3506-4cbe-972d-51578ec067af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube5\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"920fd4e5-8209-4655-a97a-d854a5b49c4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"43948731-edc6-435f-b872-cab04dea92c9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube5\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"76c5dcc3-971a-4695-8850-c18520ab2e93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18804"}}
	{"specversion":"1.0","id":"5447cc6a-8e11-4a8e-aee9-a451f6834825","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b9ad3c38-5a2f-421a-bc60-404c75ed7588","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
** stderr ** 
	W0507 19:14:36.111942    3764 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
** /stderr **
helpers_test.go:175: Cleaning up "json-output-error-311700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-311700
--- PASS: TestErrorJSONOutput (1.20s)
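With `--output=json`, minikube emits one CloudEvents-style JSON object per line, as shown in the stdout capture above; the error event carries the exit code and failure name. A minimal sketch of consuming such a stream (the `first_error` helper is illustrative, and the two sample lines are copied from the log above):

```python
import json

# Two CloudEvents lines copied from the minikube --output=json stream above.
stream = '''\
{"specversion":"1.0","id":"76c5dcc3-971a-4695-8850-c18520ab2e93","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18804"}}
{"specversion":"1.0","id":"b9ad3c38-5a2f-421a-bc60-404c75ed7588","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
'''

def first_error(lines):
    """Return the data payload of the first io.k8s.sigs.minikube.error event, if any."""
    for line in lines:
        if not line.strip():
            continue
        event = json.loads(line)
        if event["type"] == "io.k8s.sigs.minikube.error":
            return event["data"]
    return None

err = first_error(stream.splitlines())
print(err["exitcode"], err["name"])  # → 56 DRV_UNSUPPORTED_OS
```

This is what the test harness effectively relies on: the exit code in the error event ("56") matches the process exit status recorded by the Non-zero exit line above.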

TestMainNoArgs (0.17s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.17s)

TestMinikubeProfile (483.64s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-963100 --driver=hyperv
E0507 19:15:01.279304    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 19:15:23.277322    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 19:16:46.499126    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-963100 --driver=hyperv: (2m57.2956218s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-963100 --driver=hyperv
E0507 19:20:01.299386    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 19:20:23.296628    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-963100 --driver=hyperv: (2m58.7910648s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-963100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (18.95913s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-963100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (19.1393723s)
helpers_test.go:175: Cleaning up "second-963100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-963100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-963100: (44.4780786s)
helpers_test.go:175: Cleaning up "first-963100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-963100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-963100: (44.2753502s)
--- PASS: TestMinikubeProfile (483.64s)

TestMountStart/serial/StartWithMountFirst (138.14s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-200900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0507 19:24:44.538632    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-200900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m17.1353308s)
--- PASS: TestMountStart/serial/StartWithMountFirst (138.14s)

TestMountStart/serial/VerifyMountFirst (8.63s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-200900 ssh -- ls /minikube-host
E0507 19:25:01.313194    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-200900 ssh -- ls /minikube-host: (8.6265155s)
--- PASS: TestMountStart/serial/VerifyMountFirst (8.63s)

TestMountStart/serial/StartWithMountSecond (137.73s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-200900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0507 19:25:23.311979    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-200900 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m16.712646s)
--- PASS: TestMountStart/serial/StartWithMountSecond (137.73s)

TestMountStart/serial/VerifyMountSecond (8.41s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-200900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-200900 ssh -- ls /minikube-host: (8.4060728s)
--- PASS: TestMountStart/serial/VerifyMountSecond (8.41s)

TestMountStart/serial/DeleteFirst (25.13s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-200900 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-200900 --alsologtostderr -v=5: (25.1348617s)
--- PASS: TestMountStart/serial/DeleteFirst (25.13s)

TestMountStart/serial/VerifyMountPostDelete (8.42s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-200900 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-200900 ssh -- ls /minikube-host: (8.4178384s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (8.42s)

TestMountStart/serial/Stop (27.37s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-200900
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-200900: (27.3672453s)
--- PASS: TestMountStart/serial/Stop (27.37s)

TestMountStart/serial/RestartStopped (105.96s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-200900
E0507 19:30:01.345370    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-200900: (1m44.9583705s)
--- PASS: TestMountStart/serial/RestartStopped (105.96s)

TestMountStart/serial/VerifyMountPostStop (8.68s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-200900 ssh -- ls /minikube-host
E0507 19:30:23.339056    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-200900 ssh -- ls /minikube-host: (8.6828121s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (8.68s)

TestMultiNode/serial/FreshStart2Nodes (386.58s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-600000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0507 19:33:26.570186    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 19:35:01.355828    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 19:35:23.353349    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-600000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m5.6473426s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 status --alsologtostderr
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 status --alsologtostderr: (20.9287533s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (386.58s)

TestMultiNode/serial/DeployApp2Nodes (8.21s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- rollout status deployment/busybox: (2.8717764s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- exec busybox-fc5497c4f-cpw2r -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- exec busybox-fc5497c4f-cpw2r -- nslookup kubernetes.io: (2.0705556s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- exec busybox-fc5497c4f-gcqlv -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- exec busybox-fc5497c4f-cpw2r -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- exec busybox-fc5497c4f-gcqlv -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- exec busybox-fc5497c4f-cpw2r -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-600000 -- exec busybox-fc5497c4f-gcqlv -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (8.21s)

TestMultiNode/serial/AddNode (204.21s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-600000 -v 3 --alsologtostderr
E0507 19:40:01.372707    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 19:40:23.375448    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-600000 -v 3 --alsologtostderr: (2m53.0168678s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 status --alsologtostderr
E0507 19:41:24.614784    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 status --alsologtostderr: (31.1886534s)
--- PASS: TestMultiNode/serial/AddNode (204.21s)

TestMultiNode/serial/MultiNodeLabels (0.15s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-600000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.15s)

TestMultiNode/serial/ProfileList (10.33s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (10.3262571s)
--- PASS: TestMultiNode/serial/ProfileList (10.33s)

TestMultiNode/serial/CopyFile (318.07s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 status --output json --alsologtostderr: (31.3370803s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 cp testdata\cp-test.txt multinode-600000:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 cp testdata\cp-test.txt multinode-600000:/home/docker/cp-test.txt: (8.3119478s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000 "sudo cat /home/docker/cp-test.txt": (8.2952481s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile2685173768\001\cp-test_multinode-600000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile2685173768\001\cp-test_multinode-600000.txt: (8.2862402s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000 "sudo cat /home/docker/cp-test.txt": (8.2767384s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000:/home/docker/cp-test.txt multinode-600000-m02:/home/docker/cp-test_multinode-600000_multinode-600000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000:/home/docker/cp-test.txt multinode-600000-m02:/home/docker/cp-test_multinode-600000_multinode-600000-m02.txt: (14.3430572s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000 "sudo cat /home/docker/cp-test.txt": (8.2447619s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m02 "sudo cat /home/docker/cp-test_multinode-600000_multinode-600000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m02 "sudo cat /home/docker/cp-test_multinode-600000_multinode-600000-m02.txt": (8.2315094s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000:/home/docker/cp-test.txt multinode-600000-m03:/home/docker/cp-test_multinode-600000_multinode-600000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000:/home/docker/cp-test.txt multinode-600000-m03:/home/docker/cp-test_multinode-600000_multinode-600000-m03.txt: (14.4774788s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000 "sudo cat /home/docker/cp-test.txt": (8.2180412s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m03 "sudo cat /home/docker/cp-test_multinode-600000_multinode-600000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m03 "sudo cat /home/docker/cp-test_multinode-600000_multinode-600000-m03.txt": (8.3833043s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 cp testdata\cp-test.txt multinode-600000-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 cp testdata\cp-test.txt multinode-600000-m02:/home/docker/cp-test.txt: (8.2588901s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m02 "sudo cat /home/docker/cp-test.txt": (8.2400014s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile2685173768\001\cp-test_multinode-600000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile2685173768\001\cp-test_multinode-600000-m02.txt: (8.2636132s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m02 "sudo cat /home/docker/cp-test.txt": (8.2897511s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000-m02:/home/docker/cp-test.txt multinode-600000:/home/docker/cp-test_multinode-600000-m02_multinode-600000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000-m02:/home/docker/cp-test.txt multinode-600000:/home/docker/cp-test_multinode-600000-m02_multinode-600000.txt: (14.331328s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m02 "sudo cat /home/docker/cp-test.txt": (8.3233489s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000 "sudo cat /home/docker/cp-test_multinode-600000-m02_multinode-600000.txt"
E0507 19:45:01.408137    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000 "sudo cat /home/docker/cp-test_multinode-600000-m02_multinode-600000.txt": (8.4309192s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000-m02:/home/docker/cp-test.txt multinode-600000-m03:/home/docker/cp-test_multinode-600000-m02_multinode-600000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000-m02:/home/docker/cp-test.txt multinode-600000-m03:/home/docker/cp-test_multinode-600000-m02_multinode-600000-m03.txt: (14.737421s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m02 "sudo cat /home/docker/cp-test.txt"
E0507 19:45:23.391871    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m02 "sudo cat /home/docker/cp-test.txt": (8.3653942s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m03 "sudo cat /home/docker/cp-test_multinode-600000-m02_multinode-600000-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m03 "sudo cat /home/docker/cp-test_multinode-600000-m02_multinode-600000-m03.txt": (8.3736274s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 cp testdata\cp-test.txt multinode-600000-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 cp testdata\cp-test.txt multinode-600000-m03:/home/docker/cp-test.txt: (8.4267207s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m03 "sudo cat /home/docker/cp-test.txt": (8.3754404s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile2685173768\001\cp-test_multinode-600000-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube5\AppData\Local\Temp\TestMultiNodeserialCopyFile2685173768\001\cp-test_multinode-600000-m03.txt: (8.5002771s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m03 "sudo cat /home/docker/cp-test.txt": (8.4862824s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000-m03:/home/docker/cp-test.txt multinode-600000:/home/docker/cp-test_multinode-600000-m03_multinode-600000.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000-m03:/home/docker/cp-test.txt multinode-600000:/home/docker/cp-test_multinode-600000-m03_multinode-600000.txt: (14.6893226s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m03 "sudo cat /home/docker/cp-test.txt": (8.2559824s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000 "sudo cat /home/docker/cp-test_multinode-600000-m03_multinode-600000.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000 "sudo cat /home/docker/cp-test_multinode-600000-m03_multinode-600000.txt": (8.3255553s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000-m03:/home/docker/cp-test.txt multinode-600000-m02:/home/docker/cp-test_multinode-600000-m03_multinode-600000-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 cp multinode-600000-m03:/home/docker/cp-test.txt multinode-600000-m02:/home/docker/cp-test_multinode-600000-m03_multinode-600000-m02.txt: (14.4146618s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m03 "sudo cat /home/docker/cp-test.txt": (8.3071048s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m02 "sudo cat /home/docker/cp-test_multinode-600000-m03_multinode-600000-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 ssh -n multinode-600000-m02 "sudo cat /home/docker/cp-test_multinode-600000-m03_multinode-600000-m02.txt": (8.2547784s)
--- PASS: TestMultiNode/serial/CopyFile (318.07s)

TestMultiNode/serial/StopNode (68.59s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 node stop m03: (22.9765013s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-600000 status: exit status 7 (22.79272s)

-- stdout --
	multinode-600000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-600000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-600000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0507 19:47:38.219686    5572 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-600000 status --alsologtostderr: exit status 7 (22.8196631s)

-- stdout --
	multinode-600000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-600000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-600000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	W0507 19:48:00.993497    5424 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0507 19:48:01.052246    5424 out.go:291] Setting OutFile to fd 712 ...
	I0507 19:48:01.052962    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 19:48:01.052962    5424 out.go:304] Setting ErrFile to fd 892...
	I0507 19:48:01.052962    5424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 19:48:01.064217    5424 out.go:298] Setting JSON to false
	I0507 19:48:01.065202    5424 mustload.go:65] Loading cluster: multinode-600000
	I0507 19:48:01.065202    5424 notify.go:220] Checking for updates...
	I0507 19:48:01.065990    5424 config.go:182] Loaded profile config "multinode-600000": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 19:48:01.065990    5424 status.go:255] checking status of multinode-600000 ...
	I0507 19:48:01.066997    5424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:48:03.021111    5424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:48:03.021111    5424 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:48:03.021111    5424 status.go:330] multinode-600000 host status = "Running" (err=<nil>)
	I0507 19:48:03.021111    5424 host.go:66] Checking if "multinode-600000" exists ...
	I0507 19:48:03.022057    5424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:48:04.952195    5424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:48:04.952195    5424 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:48:04.952195    5424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:48:07.209661    5424 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:48:07.209661    5424 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:48:07.209661    5424 host.go:66] Checking if "multinode-600000" exists ...
	I0507 19:48:07.224497    5424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0507 19:48:07.224497    5424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000 ).state
	I0507 19:48:09.116421    5424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:48:09.116421    5424 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:48:09.117335    5424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000 ).networkadapters[0]).ipaddresses[0]
	I0507 19:48:11.349009    5424 main.go:141] libmachine: [stdout =====>] : 172.19.143.74
	
	I0507 19:48:11.349009    5424 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:48:11.349351    5424 sshutil.go:53] new ssh client: &{IP:172.19.143.74 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000\id_rsa Username:docker}
	I0507 19:48:11.448223    5424 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.2234389s)
	I0507 19:48:11.457308    5424 ssh_runner.go:195] Run: systemctl --version
	I0507 19:48:11.473212    5424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 19:48:11.500300    5424 kubeconfig.go:125] found "multinode-600000" server: "https://172.19.143.74:8443"
	I0507 19:48:11.500418    5424 api_server.go:166] Checking apiserver status ...
	I0507 19:48:11.510685    5424 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 19:48:11.541956    5424 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2056/cgroup
	W0507 19:48:11.559007    5424 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2056/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0507 19:48:11.567500    5424 ssh_runner.go:195] Run: ls
	I0507 19:48:11.574417    5424 api_server.go:253] Checking apiserver healthz at https://172.19.143.74:8443/healthz ...
	I0507 19:48:11.583572    5424 api_server.go:279] https://172.19.143.74:8443/healthz returned 200:
	ok
	I0507 19:48:11.583572    5424 status.go:422] multinode-600000 apiserver status = Running (err=<nil>)
	I0507 19:48:11.583572    5424 status.go:257] multinode-600000 status: &{Name:multinode-600000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0507 19:48:11.583572    5424 status.go:255] checking status of multinode-600000-m02 ...
	I0507 19:48:11.584829    5424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:48:13.479122    5424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:48:13.479122    5424 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:48:13.479122    5424 status.go:330] multinode-600000-m02 host status = "Running" (err=<nil>)
	I0507 19:48:13.479824    5424 host.go:66] Checking if "multinode-600000-m02" exists ...
	I0507 19:48:13.480400    5424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:48:15.365233    5424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:48:15.365233    5424 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:48:15.365913    5424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:48:17.590775    5424 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:48:17.590775    5424 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:48:17.591538    5424 host.go:66] Checking if "multinode-600000-m02" exists ...
	I0507 19:48:17.598807    5424 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0507 19:48:17.598807    5424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m02 ).state
	I0507 19:48:19.455420    5424 main.go:141] libmachine: [stdout =====>] : Running
	
	I0507 19:48:19.455420    5424 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:48:19.455827    5424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-600000-m02 ).networkadapters[0]).ipaddresses[0]
	I0507 19:48:21.692874    5424 main.go:141] libmachine: [stdout =====>] : 172.19.143.144
	
	I0507 19:48:21.692874    5424 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:48:21.693231    5424 sshutil.go:53] new ssh client: &{IP:172.19.143.144 Port:22 SSHKeyPath:C:\Users\jenkins.minikube5\minikube-integration\.minikube\machines\multinode-600000-m02\id_rsa Username:docker}
	I0507 19:48:21.788452    5424 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.1893607s)
	I0507 19:48:21.796904    5424 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0507 19:48:21.818213    5424 status.go:257] multinode-600000-m02 status: &{Name:multinode-600000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0507 19:48:21.818306    5424 status.go:255] checking status of multinode-600000-m03 ...
	I0507 19:48:21.818392    5424 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-600000-m03 ).state
	I0507 19:48:23.697801    5424 main.go:141] libmachine: [stdout =====>] : Off
	
	I0507 19:48:23.697801    5424 main.go:141] libmachine: [stderr =====>] : 
	I0507 19:48:23.698115    5424 status.go:330] multinode-600000-m03 host status = "Stopped" (err=<nil>)
	I0507 19:48:23.698115    5424 status.go:343] host is not running, skipping remaining checks
	I0507 19:48:23.698115    5424 status.go:257] multinode-600000-m03 status: &{Name:multinode-600000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (68.59s)

TestMultiNode/serial/StartAfterStop (160.94s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 node start m03 -v=7 --alsologtostderr
E0507 19:50:01.423455    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 19:50:06.651036    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 19:50:23.415439    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 node start m03 -v=7 --alsologtostderr: (2m9.5284124s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-600000 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-600000 status -v=7 --alsologtostderr: (31.258639s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (160.94s)

TestPreload (485.47s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-437400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0507 20:05:01.486099    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 20:05:23.483173    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
E0507 20:06:46.730105    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-437400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m7.244543s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-437400 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-437400 image pull gcr.io/k8s-minikube/busybox: (7.6151109s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-437400
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-437400: (38.007619s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-437400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0507 20:10:01.503410    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 20:10:23.500117    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-437400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m25.6007836s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-437400 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-437400 image list: (6.5270446s)
helpers_test.go:175: Cleaning up "test-preload-437400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-437400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-437400: (40.4690736s)
--- PASS: TestPreload (485.47s)

TestScheduledStopWindows (305.68s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-806300 --memory=2048 --driver=hyperv
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-806300 --memory=2048 --driver=hyperv: (2m58.0608931s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-806300 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-806300 --schedule 5m: (9.4862808s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-806300 -n scheduled-stop-806300
E0507 20:14:44.754865    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-806300 -n scheduled-stop-806300: exit status 1 (10.0162113s)

** stderr ** 
	W0507 20:14:41.499533    2624 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-806300 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-806300 -- sudo systemctl show minikube-scheduled-stop --no-page: (8.4330824s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-806300 --schedule 5s
E0507 20:15:01.518743    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-806300 --schedule 5s: (9.5937031s)
E0507 20:15:23.516419    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-806300
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-806300: exit status 7 (2.1343411s)

-- stdout --
	scheduled-stop-806300
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
** stderr ** 
	W0507 20:16:09.561853    2100 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-806300 -n scheduled-stop-806300
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-806300 -n scheduled-stop-806300: exit status 7 (2.145662s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0507 20:16:11.720290    4508 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-806300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-806300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-806300: (25.8007977s)
--- PASS: TestScheduledStopWindows (305.68s)

TestRunningBinaryUpgrade (1017.65s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1270660113.exe start -p running-upgrade-820400 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1270660113.exe start -p running-upgrade-820400 --memory=2200 --vm-driver=hyperv: (8m0.1927108s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-820400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0507 20:25:01.565934    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 20:25:23.564045    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-820400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (7m33.8804475s)
helpers_test.go:175: Cleaning up "running-upgrade-820400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-820400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-820400: (1m22.7762325s)
--- PASS: TestRunningBinaryUpgrade (1017.65s)

TestKubernetesUpgrade (977.41s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-886800 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-886800 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: (3m5.5603849s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-886800
E0507 20:20:01.536746    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-886800: (33.0026656s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-886800 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-886800 status --format={{.Host}}: exit status 7 (2.1661478s)

-- stdout --
	Stopped

-- /stdout --
** stderr ** 
	W0507 20:20:18.222264    2104 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-886800 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv
E0507 20:20:23.537055    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-886800 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv: (5m41.8722453s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-886800 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-886800 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-886800 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (216.466ms)

-- stdout --
	* [kubernetes-upgrade-886800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18804
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0507 20:26:02.425959   13880 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-886800
	    minikube start -p kubernetes-upgrade-886800 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-8868002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-886800 --kubernetes-version=v1.30.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-886800 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-886800 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperv: (6m4.6216532s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-886800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-886800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-886800: (49.8277129s)
--- PASS: TestKubernetesUpgrade (977.41s)
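The non-zero exits in this section are deliberate: minikube signals distinct failure classes through distinct exit codes, and the tests assert on them (7 for `status` against a stopped host, 106 for `K8S_DOWNGRADE_UNSUPPORTED`; elsewhere in this report, 14 for `MK_USAGE` and 2 for `status` against a paused cluster). A minimal sketch of a lookup covering only the codes observed in this run — the descriptions come from the log lines in this report, and this is deliberately not minikube's full exit-code table:

```python
# Exit codes observed in this test run, mapped to the failure classes the
# log attributes to them. Illustrative subset only -- not minikube's
# complete exit-code table.
OBSERVED_EXIT_CODES = {
    0: "success",
    2: "status: cluster is paused",
    7: "status: host is stopped (the test treats this as 'may be ok')",
    14: "MK_USAGE: invalid flag combination",
    106: "K8S_DOWNGRADE_UNSUPPORTED: cannot downgrade an existing cluster",
}

def describe_exit(code: int) -> str:
    """Return the meaning recorded in this report for an exit code."""
    return OBSERVED_EXIT_CODES.get(code, f"exit code {code} not seen in this run")
```

For example, `describe_exit(106)` yields the downgrade-rejection class that `TestKubernetesUpgrade` expects when it attempts v1.30.0 → v1.20.0.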

TestNoKubernetes/serial/StartNoK8sWithVersion (0.31s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-728800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-728800 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (312.174ms)

-- stdout --
	* [NoKubernetes-728800] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18804
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	W0507 20:16:39.660669    7664 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.31s)

TestStoppedBinaryUpgrade/Setup (0.69s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.69s)

TestStoppedBinaryUpgrade/Upgrade (718.04s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1051420797.exe start -p stopped-upgrade-938400 --memory=2200 --vm-driver=hyperv
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1051420797.exe start -p stopped-upgrade-938400 --memory=2200 --vm-driver=hyperv: (6m1.4726746s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1051420797.exe -p stopped-upgrade-938400 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube5\AppData\Local\Temp\minikube-v1.26.0.1051420797.exe -p stopped-upgrade-938400 stop: (34.3175292s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-938400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0507 20:30:01.583351    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-938400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (5m22.244276s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (718.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (8.2s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-938400
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-938400: (8.1950997s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (8.20s)

TestPause/serial/Start (467.12s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-774000 --memory=2048 --install-addons=false --wait=all --driver=hyperv
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-774000 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (7m47.1187915s)
--- PASS: TestPause/serial/Start (467.12s)

TestPause/serial/SecondStartNoReconfiguration (368.7s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-774000 --alsologtostderr -v=1 --driver=hyperv
E0507 20:45:01.640315    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\functional-527400\client.crt: The system cannot find the path specified.
E0507 20:45:23.644421    9992 cert_rotation.go:168] key failed with : open C:\Users\jenkins.minikube5\minikube-integration\.minikube\profiles\addons-809100\client.crt: The system cannot find the path specified.
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-774000 --alsologtostderr -v=1 --driver=hyperv: (6m8.6625368s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (368.70s)

TestPause/serial/Pause (9.04s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-774000 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-774000 --alsologtostderr -v=5: (9.0439779s)
--- PASS: TestPause/serial/Pause (9.04s)

TestPause/serial/VerifyStatus (12.73s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-774000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-774000 --output=json --layout=cluster: exit status 2 (12.7275619s)

-- stdout --
	{"Name":"pause-774000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-774000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	W0507 20:51:13.452926    1168 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.

** /stderr **
--- PASS: TestPause/serial/VerifyStatus (12.73s)
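The exit status 2 above is expected for a paused profile: `status --output=json --layout=cluster` reports the cluster with HTTP-style status codes (200 OK, 405 Stopped, 418 Paused). A small sketch of consuming that JSON — the payload below is copied verbatim from the stdout above, with whitespace added only between JSON tokens:

```python
import json

# Payload copied verbatim from the `minikube status --output=json
# --layout=cluster` stdout above (reflowed between JSON tokens).
STATUS_JSON = """
{"Name":"pause-774000","StatusCode":418,"StatusName":"Paused","Step":"Done",
 "StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator",
 "BinaryVersion":"v1.33.0",
 "Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},
 "Nodes":[{"Name":"pause-774000","StatusCode":200,"StatusName":"OK",
           "Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},
                         "kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
"""

def paused_components(status: dict) -> list[str]:
    """Names of per-node components whose HTTP-style code is 418 (Paused)."""
    return [
        name
        for node in status["Nodes"]
        for name, comp in node["Components"].items()
        if comp["StatusCode"] == 418
    ]

status = json.loads(STATUS_JSON)
print(status["StatusName"], paused_components(status))  # → Paused ['apiserver']
```

Note that in this run the kubelet is reported as 405 (Stopped) rather than 418 while the apiserver is paused, which is why the overall cluster still rolls up to `"StatusName":"Paused"`.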

TestPause/serial/Unpause (8.29s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-774000 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe unpause -p pause-774000 --alsologtostderr -v=5: (8.2880151s)
--- PASS: TestPause/serial/Unpause (8.29s)

TestPause/serial/PauseAgain (8.93s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-774000 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-774000 --alsologtostderr -v=5: (8.9319485s)
--- PASS: TestPause/serial/PauseAgain (8.93s)

Test skip (30/209)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

TestDownloadOnly/v1.30.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.03s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-527400 --alsologtostderr -v=1]
functional_test.go:912: output didn't produce a URL
functional_test.go:906: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-527400 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 8580: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.03s)

TestFunctional/parallel/DryRun (5.05s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-527400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:970: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-527400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0453305s)

-- stdout --
	* [functional-527400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18804
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0507 18:25:37.360907    7920 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0507 18:25:37.432170    7920 out.go:291] Setting OutFile to fd 832 ...
	I0507 18:25:37.432170    7920 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 18:25:37.433171    7920 out.go:304] Setting ErrFile to fd 984...
	I0507 18:25:37.433171    7920 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 18:25:37.457173    7920 out.go:298] Setting JSON to false
	I0507 18:25:37.462706    7920 start.go:129] hostinfo: {"hostname":"minikube5","uptime":22255,"bootTime":1715084081,"procs":196,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0507 18:25:37.462706    7920 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 18:25:37.472829    7920 out.go:177] * [functional-527400] minikube v1.33.0 on Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0507 18:25:37.477822    7920 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:25:37.477274    7920 notify.go:220] Checking for updates...
	I0507 18:25:37.481888    7920 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 18:25:37.485222    7920 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0507 18:25:37.488215    7920 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 18:25:37.492308    7920 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 18:25:37.496320    7920 config.go:182] Loaded profile config "functional-527400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:25:37.498318    7920 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:976: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.05s)

TestFunctional/parallel/InternationalLanguage (5.02s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-527400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-527400 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0208215s)

-- stdout --
	* [functional-527400] minikube v1.33.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	  - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=18804
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

-- /stdout --
** stderr ** 
	W0507 18:25:32.310240    2936 main.go:291] Unable to resolve the current Docker CLI context "default": context "default": context not found: open C:\Users\jenkins.minikube5\.docker\contexts\meta\37a8eec1ce19687d132fe29051dca629d164e2c4958ba141d5f4133a33f0688f\meta.json: The system cannot find the path specified.
	I0507 18:25:32.369243    2936 out.go:291] Setting OutFile to fd 724 ...
	I0507 18:25:32.370243    2936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 18:25:32.370243    2936 out.go:304] Setting ErrFile to fd 1004...
	I0507 18:25:32.370243    2936 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 18:25:32.389242    2936 out.go:298] Setting JSON to false
	I0507 18:25:32.393238    2936 start.go:129] hostinfo: {"hostname":"minikube5","uptime":22250,"bootTime":1715084081,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.4291 Build 19045.4291","kernelVersion":"10.0.19045.4291 Build 19045.4291","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"b047c2aa-b84e-4b82-894c-ed46f3580f4d"}
	W0507 18:25:32.393238    2936 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0507 18:25:32.399235    2936 out.go:177] * [functional-527400] minikube v1.33.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.4291 Build 19045.4291
	I0507 18:25:32.401240    2936 notify.go:220] Checking for updates...
	I0507 18:25:32.405265    2936 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube5\minikube-integration\kubeconfig
	I0507 18:25:32.407244    2936 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0507 18:25:32.410261    2936 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube5\minikube-integration\.minikube
	I0507 18:25:32.415245    2936 out.go:177]   - MINIKUBE_LOCATION=18804
	I0507 18:25:32.420242    2936 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0507 18:25:32.423242    2936 config.go:182] Loaded profile config "functional-527400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0507 18:25:32.425249    2936 driver.go:392] Setting default libvirt URI to qemu:///system

** /stderr **
functional_test.go:1021: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.02s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)